Checking whether FSx backups are billed for the uncompressed data size when data on Amazon FSx for NetApp ONTAP capacity pool storage is compressed

It seems there is no need to push aggregate-layer data reduction just to cut backup storage costs for Tiering Policy All volumes
2024.01.09

Running Inactive data compression just to cut backup storage costs may end up being wasted effort

Hello, this is のんピ (@non____97).

Have you ever wondered whether, when data on Amazon FSx for NetApp ONTAP (hereafter FSxN) capacity pool storage is compressed, FSx backups are billed for the uncompressed data size? I have.

As stated in the pricing table, FSxN backup storage is billed at 0.050 USD per GB, i.e., according to the size of the backed-up data.

FSxN pricing table

Excerpt: Amazon FSx for NetApp ONTAP Pricing - AWS

Also, as verified in the article below, my understanding is that the FSxN backup feature runs SnapMirror internally.

And, as verified in the article below, I confirmed that when the SnapMirror source volume's Tiering Policy is All, that is, when the data sits on capacity pool storage, the aggregate-layer data reduction savings are not preserved during the transfer.

The Storage Efficiency enabled on FSxN is TSSE. TSSE compression and compaction are processed at the aggregate layer.

Given the above, my concern was that when data sits on capacity pool storage, the data reduction savings are lost at the moment SnapMirror transfers it, even if it was compressed at the aggregate layer, so the FSxN backup would be billed for the uncompressed data size.

In other words, I suspected that even if the compressed data size is the same, the backup cost would change depending on which tier, SSD or capacity pool storage, holds the data.

Suppose you store 1 TB of data and the compression ratio is 50%; the physical data size after compression is 0.5 TB. If that data is kept on SSD the billed size would be 0.5 TB, but would tiering it to capacity pool storage make the billed size 1 TB?
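
If that were the case, the difference would be easy to put a number on. A quick sketch, assuming the 0.050 USD per GB rate from the pricing excerpt above and treating 1 TB as 1,000 GB for simplicity:

$ # billed at the uncompressed size (1 TB)
$ echo '1000 * 0.050' | bc
50.000
$ # billed at the compressed size (0.5 TB)
$ echo '500 * 0.050' | bc
25.000

So if the concern held, simply letting data tier down to capacity pool storage could double the backup bill for the same logical data.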

I actually tested it.

Summary up front

  • The scenario "when data sits on capacity pool storage, the data reduction savings are lost at SnapMirror transfer time even if the data was compressed at the aggregate layer, so the FSxN backup is billed for the uncompressed data size" does not occur
  • Aggregate-layer data reduction savings on the FSxN file system are ignored on backup storage
  • However, a separate data reduction effect applies on backup storage
    • Inferred from the fact that the backup storage size does not change with whether aggregate-layer data reduction is in effect, or with whether the data sits on SSD or the capacity pool
    • If the only purpose of running Inactive data compression is to cut backup storage costs, that purpose cannot be achieved, because compressing the data has no effect on the backup storage size
    • It can still be a reason to enable Inactive data compression in order to keep physical SSD consumption down when data is written back to SSD
  • If high backup storage costs are the problem, the following approaches are worth considering
    • Reduce physical volume usage with deduplication
    • Instead of FSx backups, add another FSxN file system and back up with SnapMirror
  • Running Inactive data compression increases the logical size of the aggregate
  • Running Inactive data compression increases not only Auto Adaptive Compression in volume show-footprint but also data-compaction-space-saved in aggr show
  • There is a lag before Inactive data compression frees the aggregate's data blocks
  • Tiering shrinks the Auto adaptive compression and compaction savings on the aggregate's SSD
  • Even if aggregate-layer data reduction such as Inactive data compression or compaction is not in effect on SSD, data reduction savings are obtained at the moment of tiering
    • However, when the data is written back to SSD, the savings obtained when tiering to capacity pool storage are not preserved
    • For data that already had aggregate-layer data reduction such as Inactive data compression or compaction applied beforehand, the savings are preserved even when it is written back to SSD
  • Restoring from a backup of a volume where Inactive data compression was in effect does not restore the volume with the data reduction savings intact
    • You need to apply Inactive data compression yourself after the restore
  • Backup storage cost is calculated from the average backup storage usage for the month
  • Data that barely compresses is kept on backup storage without data reduction
  • Whether the backed-up data sits on SSD or on capacity pool storage has no effect on backup storage size

Verification environment

The verification environment is as follows.

Verification setup diagram

I created FSxN file systems in N. Virginia (us-east-1) and Osaka (ap-northeast-3).

On the N. Virginia FSxN, I run Inactive data compression, tier the data to capacity pool storage, and then take a backup.

On the Osaka FSxN, I run Inactive data compression and then take a backup as is.
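
For reference, taking a backup of an FSxN volume from the AWS CLI looks roughly like the following. This is only a sketch; the volume ID and tag value are placeholders, and the backups in this verification could equally be taken from the management console.

$ aws fsx create-backup \
    --volume-id fsvol-0123456789abcdef0 \
    --tags Key=Name,Value=vol1-backup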

After that I leave everything alone for a few days and use Cost Explorer to check how much each region gets charged.
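
The per-region charges can also be pulled with the AWS CLI instead of the Cost Explorer console. A minimal sketch, assuming the SERVICE dimension value "Amazon FSx" and an arbitrary date range; grouping by USAGE_TYPE makes it easy to pick out the backup-related usage types:

$ aws ce get-cost-and-usage \
    --time-period Start=2024-01-05,End=2024-01-09 \
    --granularity DAILY \
    --metrics UsageQuantity UnblendedCost \
    --group-by Type=DIMENSION,Key=USAGE_TYPE Type=DIMENSION,Key=REGION \
    --filter '{"Dimensions":{"Key":"SERVICE","Values":["Amazon FSx"]}}'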

The ONTAP version of the N. Virginia FSxN is ONTAP 9.13.1P6.

::> version
NetApp Release 9.13.1P6: Tue Dec 05 16:06:25 UTC 2023

The ONTAP version of the Osaka FSxN is ONTAP 9.13.1P5.

::> version
NetApp Release 9.13.1P5: Thu Nov 02 20:37:09 UTC 2023

The default Storage Efficiency, volume, and aggregate information of the N. Virginia FSxN is as follows.

::> set diag

Warning: These diagnostic commands are for use by NetApp personnel only.
Do you want to continue? {y|n}: y

::*> volume efficiency show
Vserver    Volume           State     Status      Progress           Policy
---------- ---------------- --------- ----------- ------------------ ----------
svm        vol1             Disabled  Idle        Idle for 00:11:52  auto

::*> volume show -volume vol1 -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared,logical-used, logical-used-percent,logical-used-by-afs, logical-available
vserver volume size available filesystem-size total   used  percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- ------ ---- --------- --------------- ------- ----- ------------ ------------------ -------------------------- ------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- -------------------------------------------
svm     vol1   64GB 60.80GB   64GB            60.80GB 308KB 0%           0B                 0%                         0B                  308KB        0%                   -                 308KB               0B                                  0%

::*> volume show-footprint -volume vol1


      Vserver : svm
      Volume  : vol1

      Feature                                         Used       Used%
      --------------------------------             ----------    -----
      Volume Data Footprint                             308KB       0%
             Footprint in Performance Tier             1.85MB     100%
             Footprint in FSxFabricpoolObjectStore         0B       0%
      Volume Guarantee                                     0B       0%
      Flexible Volume Metadata                        107.5MB       0%
      Delayed Frees                                    1.55MB       0%
      File Operation Metadata                             4KB       0%

      Total Footprint                                 109.3MB       0%

      Effective Total Footprint                       109.3MB       0%

::*> aggr show-efficiency -instance

                             Name of the Aggregate: aggr1
                      Node where Aggregate Resides: FsxId0ff65f4e6a2c26228-01
Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 128KB
                               Total Physical Used: 276KB
                    Total Storage Efficiency Ratio: 1.00:1
Total Data Reduction Logical Used Without Snapshots: 128KB
Total Data Reduction Physical Used Without Snapshots: 276KB
Total Data Reduction Efficiency Ratio Without Snapshots: 1.00:1
Total Data Reduction Logical Used without snapshots and flexclones: 128KB
Total Data Reduction Physical Used without snapshots and flexclones: 276KB
Total Data Reduction Efficiency Ratio without snapshots and flexclones: 1.00:1
Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 608KB
Total Physical Used in FabricPool Performance Tier: 4.58MB
Total FabricPool Performance Tier Storage Efficiency Ratio: 1.00:1
Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 608KB
Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 4.58MB
Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 1.00:1
                Logical Space Used for All Volumes: 128KB
               Physical Space Used for All Volumes: 128KB
               Space Saved by Volume Deduplication: 0B
Space Saved by Volume Deduplication and pattern detection: 0B
                Volume Deduplication Savings ratio: 1.00:1
                 Space Saved by Volume Compression: 0B
                  Volume Compression Savings ratio: 1.00:1
      Space Saved by Inline Zero Pattern Detection: 0B
                    Volume Data Reduction SE Ratio: 1.00:1
               Logical Space Used by the Aggregate: 276KB
              Physical Space Used by the Aggregate: 276KB
           Space Saved by Aggregate Data Reduction: 0B
                 Aggregate Data Reduction SE Ratio: 1.00:1
              Logical Size Used by Snapshot Copies: 0B
             Physical Size Used by Snapshot Copies: 0B
              Snapshot Volume Data Reduction Ratio: 1.00:1
            Logical Size Used by FlexClone Volumes: 0B
          Physical Sized Used by FlexClone Volumes: 0B
             FlexClone Volume Data Reduction Ratio: 1.00:1
Snapshot And FlexClone Volume Data Reduction SE Ratio: 1.00:1
                         Number of Volumes Offline: 0
                    Number of SIS Disabled Volumes: 1
         Number of SIS Change Log Disabled Volumes: 1

::*> aggr show -fields availsize, usedsize, size, physical-used, physical-used-percent, data-compaction-space-saved, data-compaction-space-saved-percent, data-compacted-count, composite-capacity-tier-used, sis-space-saved, sis-space-saved-percent, sis-shared-count, inactive-data-reporting-start-timestamp
aggregate availsize size    usedsize physical-used physical-used-percent data-compaction-space-saved data-compaction-space-saved-percent data-compacted-count composite-capacity-tier-used sis-space-saved sis-space-saved-percent sis-shared-count inactive-data-reporting-start-timestamp
--------- --------- ------- -------- ------------- --------------------- --------------------------- ----------------------------------- -------------------- ---------------------------- --------------- ----------------------- ---------------- ---------------------------------------
aggr1     860.6GB   861.8GB 1.12GB   44.79MB       0%                    0B                          0%                                  0B                   0B                           0B              0%                      0B               -

::*> aggr show-space

      Aggregate : aggr1
      Performance Tier
      Feature                                          Used      Used%
      --------------------------------           ----------     ------
      Volume Footprints                              1.12GB         0%
      Aggregate Metadata                             3.95MB         0%
      Snapshot Reserve                              45.36GB         5%
      Total Used                                    46.48GB         5%

      Total Physical Used                           44.79MB         0%


      Total Provisioned Space                          65GB         7%


      Aggregate : aggr1
      Object Store: FSxFabricpoolObjectStore
      Feature                                          Used      Used%
      --------------------------------           ----------     ------
      Logical Used                                       0B          -
      Logical Referenced Capacity                        0B          -
      Logical Unreferenced Capacity                      0B          -

      Total Physical Used                                0B          -



2 entries were displayed.

The default Storage Efficiency, volume, and aggregate information of the Osaka FSxN is as follows.

::> set diag

Warning: These diagnostic commands are for use by NetApp personnel only.
Do you want to continue? {y|n}: y

::*> volume efficiency show
Vserver    Volume           State     Status      Progress           Policy
---------- ---------------- --------- ----------- ------------------ ----------
svm        vol1             Disabled  Idle        Idle for 00:23:04  auto

::*> volume show -volume vol1 -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared,logical-used, logical-used-percent,logical-used-by-afs, logical-available
vserver volume size available filesystem-size total   used  percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- ------ ---- --------- --------------- ------- ----- ------------ ------------------ -------------------------- ------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- -------------------------------------------
svm     vol1   64GB 60.80GB   64GB            60.80GB 320KB 0%           0B                 0%                         0B                  320KB        0%                   -                 320KB               0B                                  0%

::*> volume show-footprint -volume vol1


      Vserver : svm
      Volume  : vol1

      Feature                                         Used       Used%
      --------------------------------             ----------    -----
      Volume Data Footprint                             320KB       0%
             Footprint in Performance Tier             2.42MB     100%
             Footprint in FSxFabricpoolObjectStore         0B       0%
      Volume Guarantee                                     0B       0%
      Flexible Volume Metadata                        107.5MB       0%
      Delayed Frees                                    2.11MB       0%
      File Operation Metadata                             4KB       0%

      Total Footprint                                 109.9MB       0%

      Effective Total Footprint                       109.9MB       0%

::*> aggr show-efficiency -instance

                             Name of the Aggregate: aggr1
                      Node where Aggregate Resides: FsxId0f1302327a12b6488-01
Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 488KB
                               Total Physical Used: 6.02MB
                    Total Storage Efficiency Ratio: 1.00:1
Total Data Reduction Logical Used Without Snapshots: 188KB
Total Data Reduction Physical Used Without Snapshots: 5.86MB
Total Data Reduction Efficiency Ratio Without Snapshots: 1.00:1
Total Data Reduction Logical Used without snapshots and flexclones: 188KB
Total Data Reduction Physical Used without snapshots and flexclones: 5.86MB
Total Data Reduction Efficiency Ratio without snapshots and flexclones: 1.00:1
Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 964KB
Total Physical Used in FabricPool Performance Tier: 6.67MB
Total FabricPool Performance Tier Storage Efficiency Ratio: 1.00:1
Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 664KB
Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 6.51MB
Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 1.00:1
                Logical Space Used for All Volumes: 188KB
               Physical Space Used for All Volumes: 188KB
               Space Saved by Volume Deduplication: 0B
Space Saved by Volume Deduplication and pattern detection: 0B
                Volume Deduplication Savings ratio: 1.00:1
                 Space Saved by Volume Compression: 0B
                  Volume Compression Savings ratio: 1.00:1
      Space Saved by Inline Zero Pattern Detection: 0B
                    Volume Data Reduction SE Ratio: 1.00:1
               Logical Space Used by the Aggregate: 6.02MB
              Physical Space Used by the Aggregate: 6.02MB
           Space Saved by Aggregate Data Reduction: 0B
                 Aggregate Data Reduction SE Ratio: 1.00:1
              Logical Size Used by Snapshot Copies: 300KB
             Physical Size Used by Snapshot Copies: 164KB
              Snapshot Volume Data Reduction Ratio: 1.83:1
            Logical Size Used by FlexClone Volumes: 0B
          Physical Sized Used by FlexClone Volumes: 0B
             FlexClone Volume Data Reduction Ratio: 1.00:1
Snapshot And FlexClone Volume Data Reduction SE Ratio: 1.83:1
                         Number of Volumes Offline: 0
                    Number of SIS Disabled Volumes: 1
         Number of SIS Change Log Disabled Volumes: 1

::*> aggr show -fields availsize, usedsize, size, physical-used, physical-used-percent, data-compaction-space-saved, data-compaction-space-saved-percent, data-compacted-count, composite-capacity-tier-used, sis-space-saved, sis-space-saved-percent, sis-shared-count, inactive-data-reporting-start-timestamp
aggregate availsize size    usedsize physical-used physical-used-percent data-compaction-space-saved data-compaction-space-saved-percent data-compacted-count composite-capacity-tier-used sis-space-saved sis-space-saved-percent sis-shared-count inactive-data-reporting-start-timestamp
--------- --------- ------- -------- ------------- --------------------- --------------------------- ----------------------------------- -------------------- ---------------------------- --------------- ----------------------- ---------------- ---------------------------------------
aggr1     860.6GB   861.8GB 1.12GB   71.03MB       0%                    0B                          0%                                  0B                   0B                           0B              0%                      0B               -

::*> aggr show-space

      Aggregate : aggr1
      Performance Tier
      Feature                                          Used      Used%
      --------------------------------           ----------     ------
      Volume Footprints                              1.12GB         0%
      Aggregate Metadata                             5.80MB         0%
      Snapshot Reserve                              45.36GB         5%
      Total Used                                    46.48GB         5%

      Total Physical Used                           71.03MB         0%


      Total Provisioned Space                          65GB         7%


      Aggregate : aggr1
      Object Store: FSxFabricpoolObjectStore
      Feature                                          Used      Used%
      --------------------------------           ----------     ------
      Logical Used                                       0B          -
      Logical Referenced Capacity                        0B          -
      Logical Unreferenced Capacity                      0B          -

      Total Physical Used                                0B          -



2 entries were displayed.

Creating the test files

Creating the test file on the N. Virginia FSxN

First, I create a test file.

From past verifications I know that short, simple strings such as ABCDE end up being compacted, while binary data generated from /dev/urandom barely compresses at all.

This time I prepare the test file by taking a 1 KB string, produced by Base64-encoding binary data from /dev/urandom, and repeating it for the specified number of bytes.

Creating a 1 TiB test file would take quite a while, so I create a 32 GiB one instead.

$ sudo mount -t nfs svm-0a3f4372347ab5e29.fs-0ff65f4e6a2c26228.fsx.us-east-1.amazonaws.com:/vol1 /mnt/fsxn/vol1

$ df -hT -t nfs4
Filesystem                                                                   Type  Size  Used Avail Use% Mounted on
svm-0a3f4372347ab5e29.fs-0ff65f4e6a2c26228.fsx.us-east-1.amazonaws.com:/vol1 nfs4   61G  320K   61G   1% /mnt/fsxn/vol1

$ yes \
  $(base64 /dev/urandom -w 0 \
    | head -c 1K
  ) \
  | tr -d '\n' \
  | sudo dd of=/mnt/fsxn/vol1/1KB_random_pattern_text_block_32GiB bs=4M count=8192 iflag=fullblock
8192+0 records in
8192+0 records out
34359738368 bytes (34 GB, 32 GiB) copied, 230.727 s, 149 MB/s

$ df -hT -t nfs4
Filesystem                                                                   Type  Size  Used Avail Use% Mounted on
svm-0a3f4372347ab5e29.fs-0ff65f4e6a2c26228.fsx.us-east-1.amazonaws.com:/vol1 nfs4   61G   33G   29G  53% /mnt/fsxn/vol1

$ ls -l /mnt/fsxn/vol1
total 33686544
-rw-r--r--. 1 root root 34359738368 Jan  4 23:29 1KB_random_pattern_text_block_32GiB

The Storage Efficiency, volume, and aggregate information of the N. Virginia FSxN after creating the test file is as follows.

::*> volume efficiency show
Vserver    Volume           State     Status      Progress           Policy
---------- ---------------- --------- ----------- ------------------ ----------
svm        vol1             Disabled  Idle        Idle for 00:21:41  auto

::*> volume show -volume vol1 -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared,logical-used, logical-used-percent,logical-used-by-afs, logical-available
vserver volume size available filesystem-size total   used    percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- ------ ---- --------- --------------- ------- ------- ------------ ------------------ -------------------------- ------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- -------------------------------------------
svm     vol1   64GB 28.66GB   64GB            60.80GB 32.14GB 52%          0B                 0%                         0B                  32.14GB      53%                  -                 32.14GB             0B                                  0%

::*> volume show-footprint -volume vol1


      Vserver : svm
      Volume  : vol1

      Feature                                         Used       Used%
      --------------------------------             ----------    -----
      Volume Data Footprint                           32.14GB       4%
             Footprint in Performance Tier            32.16GB     100%
             Footprint in FSxFabricpoolObjectStore         0B       0%
      Volume Guarantee                                     0B       0%
      Flexible Volume Metadata                        322.4MB       0%
      Delayed Frees                                   20.43MB       0%
      File Operation Metadata                             4KB       0%

      Total Footprint                                 32.47GB       4%

      Effective Total Footprint                       32.47GB       4%

::*> aggr show-efficiency -instance

                             Name of the Aggregate: aggr1
                      Node where Aggregate Resides: FsxId0ff65f4e6a2c26228-01
Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 32.13GB
                               Total Physical Used: 32.13GB
                    Total Storage Efficiency Ratio: 1.00:1
Total Data Reduction Logical Used Without Snapshots: 32.13GB
Total Data Reduction Physical Used Without Snapshots: 32.13GB
Total Data Reduction Efficiency Ratio Without Snapshots: 1.00:1
Total Data Reduction Logical Used without snapshots and flexclones: 32.13GB
Total Data Reduction Physical Used without snapshots and flexclones: 32.13GB
Total Data Reduction Efficiency Ratio without snapshots and flexclones: 1.00:1
Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 32.14GB
Total Physical Used in FabricPool Performance Tier: 32.23GB
Total FabricPool Performance Tier Storage Efficiency Ratio: 1.00:1
Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 32.14GB
Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 32.23GB
Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 1.00:1
                Logical Space Used for All Volumes: 32.13GB
               Physical Space Used for All Volumes: 32.13GB
               Space Saved by Volume Deduplication: 0B
Space Saved by Volume Deduplication and pattern detection: 0B
                Volume Deduplication Savings ratio: 1.00:1
                 Space Saved by Volume Compression: 0B
                  Volume Compression Savings ratio: 1.00:1
      Space Saved by Inline Zero Pattern Detection: 0B
                    Volume Data Reduction SE Ratio: 1.00:1
               Logical Space Used by the Aggregate: 32.13GB
              Physical Space Used by the Aggregate: 32.13GB
           Space Saved by Aggregate Data Reduction: 0B
                 Aggregate Data Reduction SE Ratio: 1.00:1
              Logical Size Used by Snapshot Copies: 0B
             Physical Size Used by Snapshot Copies: 0B
              Snapshot Volume Data Reduction Ratio: 1.00:1
            Logical Size Used by FlexClone Volumes: 0B
          Physical Sized Used by FlexClone Volumes: 0B
             FlexClone Volume Data Reduction Ratio: 1.00:1
Snapshot And FlexClone Volume Data Reduction SE Ratio: 1.00:1
                         Number of Volumes Offline: 0
                    Number of SIS Disabled Volumes: 1
         Number of SIS Change Log Disabled Volumes: 1

::*> aggr show -fields availsize, usedsize, size, physical-used, physical-used-percent, data-compaction-space-saved, data-compaction-space-saved-percent, data-compacted-count, composite-capacity-tier-used, sis-space-saved, sis-space-saved-percent, sis-shared-count, inactive-data-reporting-start-timestamp
aggregate availsize size    usedsize physical-used physical-used-percent data-compaction-space-saved data-compaction-space-saved-percent data-compacted-count composite-capacity-tier-used sis-space-saved sis-space-saved-percent sis-shared-count inactive-data-reporting-start-timestamp
--------- --------- ------- -------- ------------- --------------------- --------------------------- ----------------------------------- -------------------- ---------------------------- --------------- ----------------------- ---------------- ---------------------------------------
aggr1     828.2GB   861.8GB 33.51GB  32.35GB       4%                    0B                          0%                                  0B                   0B                           0B              0%                      0B               -

::*> aggr show-space

      Aggregate : aggr1
      Performance Tier
      Feature                                          Used      Used%
      --------------------------------           ----------     ------
      Volume Footprints                             33.48GB         4%
      Aggregate Metadata                            31.30MB         0%
      Snapshot Reserve                              45.36GB         5%
      Total Used                                    78.87GB         9%

      Total Physical Used                           32.35GB         4%


      Total Provisioned Space                          65GB         7%


      Aggregate : aggr1
      Object Store: FSxFabricpoolObjectStore
      Feature                                          Used      Used%
      --------------------------------           ----------     ------
      Logical Used                                       0B          -
      Logical Referenced Capacity                        0B          -
      Logical Unreferenced Capacity                      0B          -

      Total Physical Used                                0B          -



2 entries were displayed.

Since data-compaction-space-saved is 0B, compaction is not in effect.

Total Physical Used is also 32.13GB, which shows that the full physical size of the test file is being consumed.

Creating the test file on the Osaka FSxN

I create a test file on the Osaka FSxN in the same way.

$ sudo mkdir -p /mnt/fsxn/vol1

sh-5.2$ sudo mount -t nfs svm-0a7e0e36f5d9aebb9.fs-0f1302327a12b6488.fsx.ap-northeast-3.amazonaws.com:/vol1 /mnt/fsxn/vol1

sh-5.2$ df -hT -t nfs4
Filesystem                                                                        Type  Size  Used Avail Use% Mounted on
svm-0a7e0e36f5d9aebb9.fs-0f1302327a12b6488.fsx.ap-northeast-3.amazonaws.com:/vol1 nfs4   61G  320K   61G   1% /mnt/fsxn/vol1

$ yes \
  $(base64 /dev/urandom -w 0 \
    | head -c 1K
  ) \
  | tr -d '\n' \
  | sudo dd of=/mnt/fsxn/vol1/1KB_random_pattern_text_block_32GiB bs=4M count=8192 iflag=fullblock
8192+0 records in
8192+0 records out
34359738368 bytes (34 GB, 32 GiB) copied, 259.618 s, 132 MB/s

$ df -hT -t nfs4
Filesystem                                                                        Type  Size  Used Avail Use% Mounted on
svm-0a7e0e36f5d9aebb9.fs-0f1302327a12b6488.fsx.ap-northeast-3.amazonaws.com:/vol1 nfs4   61G   33G   29G  53% /mnt/fsxn/vol1
sh-5.2$
sh-5.2$ ls -l /mnt/fsxn/vol1
total 33686544
-rw-r--r--. 1 root root 34359738368 Jan  4 23:34 1KB_random_pattern_text_block_32GiB

The Storage Efficiency, volume, and aggregate information of the Osaka FSxN after creating the test file is as follows.

::*> volume efficiency show
Vserver    Volume           State     Status      Progress           Policy
---------- ---------------- --------- ----------- ------------------ ----------
svm        vol1             Disabled  Idle        Idle for 00:31:12  auto

::*> volume show -volume vol1 -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared,logical-used, logical-used-percent,logical-used-by-afs, logical-available
vserver volume size available filesystem-size total   used    percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- ------ ---- --------- --------------- ------- ------- ------------ ------------------ -------------------------- ------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- -------------------------------------------
svm     vol1   64GB 28.66GB   64GB            60.80GB 32.14GB 52%          0B                 0%                         0B                  32.14GB      53%                  -                 32.14GB             0B                                  0%

::*> volume show-footprint -volume vol1


      Vserver : svm
      Volume  : vol1

      Feature                                         Used       Used%
      --------------------------------             ----------    -----
      Volume Data Footprint                           32.14GB       4%
             Footprint in Performance Tier            32.16GB     100%
             Footprint in FSxFabricpoolObjectStore         0B       0%
      Volume Guarantee                                     0B       0%
      Flexible Volume Metadata                        322.4MB       0%
      Delayed Frees                                   19.68MB       0%
      File Operation Metadata                             4KB       0%

      Total Footprint                                 32.47GB       4%

      Effective Total Footprint                       32.47GB       4%

::*> aggr show-efficiency -instance

                             Name of the Aggregate: aggr1
                      Node where Aggregate Resides: FsxId0f1302327a12b6488-01
Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 32.14GB
                               Total Physical Used: 32.17GB
                    Total Storage Efficiency Ratio: 1.00:1
Total Data Reduction Logical Used Without Snapshots: 32.14GB
Total Data Reduction Physical Used Without Snapshots: 32.17GB
Total Data Reduction Efficiency Ratio Without Snapshots: 1.00:1
Total Data Reduction Logical Used without snapshots and flexclones: 32.14GB
Total Data Reduction Physical Used without snapshots and flexclones: 32.17GB
Total Data Reduction Efficiency Ratio without snapshots and flexclones: 1.00:1
Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 32.14GB
Total Physical Used in FabricPool Performance Tier: 32.23GB
Total FabricPool Performance Tier Storage Efficiency Ratio: 1.00:1
Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 32.14GB
Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 32.23GB
Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 1.00:1
                Logical Space Used for All Volumes: 32.14GB
               Physical Space Used for All Volumes: 32.14GB
               Space Saved by Volume Deduplication: 0B
Space Saved by Volume Deduplication and pattern detection: 0B
                Volume Deduplication Savings ratio: 1.00:1
                 Space Saved by Volume Compression: 0B
                  Volume Compression Savings ratio: 1.00:1
      Space Saved by Inline Zero Pattern Detection: 0B
                    Volume Data Reduction SE Ratio: 1.00:1
               Logical Space Used by the Aggregate: 32.17GB
              Physical Space Used by the Aggregate: 32.17GB
           Space Saved by Aggregate Data Reduction: 0B
                 Aggregate Data Reduction SE Ratio: 1.00:1
              Logical Size Used by Snapshot Copies: 300KB
             Physical Size Used by Snapshot Copies: 164KB
              Snapshot Volume Data Reduction Ratio: 1.83:1
            Logical Size Used by FlexClone Volumes: 0B
          Physical Sized Used by FlexClone Volumes: 0B
             FlexClone Volume Data Reduction Ratio: 1.00:1
Snapshot And FlexClone Volume Data Reduction SE Ratio: 1.83:1
                         Number of Volumes Offline: 0
                    Number of SIS Disabled Volumes: 1
         Number of SIS Change Log Disabled Volumes: 1

::*> aggr show -fields availsize, usedsize, size, physical-used, physical-used-percent, data-compaction-space-saved, data-compaction-space-saved-percent, data-compacted-count, composite-capacity-tier-used, sis-space-saved, sis-space-saved-percent, sis-shared-count, inactive-data-reporting-start-timestamp
aggregate availsize size    usedsize physical-used physical-used-percent data-compaction-space-saved data-compaction-space-saved-percent data-compacted-count composite-capacity-tier-used sis-space-saved sis-space-saved-percent sis-shared-count inactive-data-reporting-start-timestamp
--------- --------- ------- -------- ------------- --------------------- --------------------------- ----------------------------------- -------------------- ---------------------------- --------------- ----------------------- ---------------- ---------------------------------------
aggr1     828.2GB   861.8GB 33.51GB  32.38GB       4%                    0B                          0%                                  0B                   0B                           0B              0%                      0B               -

::*> aggr show-space

      Aggregate : aggr1
      Performance Tier
      Feature                                          Used      Used%
      --------------------------------           ----------     ------
      Volume Footprints                             33.48GB         4%
      Aggregate Metadata                            31.33MB         0%
      Snapshot Reserve                              45.36GB         5%
      Total Used                                    78.87GB         9%

      Total Physical Used                           32.38GB         4%


      Total Provisioned Space                          65GB         7%


      Aggregate : aggr1
      Object Store: FSxFabricpoolObjectStore
      Feature                                          Used      Used%
      --------------------------------           ----------     ------
      Logical Used                                       0B          -
      Logical Referenced Capacity                        0B          -
      Logical Unreferenced Capacity                      0B          -

      Total Physical Used                                0B          -



2 entries were displayed.

As with the N. Virginia FSxN, compaction is not in effect and the aggregate physically consumes just the size of the test file.

Running Inactive data compression

Running Inactive data compression on the N. Virginia FSxN

Next, I run Inactive data compression.

::*> volume efficiency on -vserver svm -volume vol1
Efficiency for volume "vol1" of Vserver "svm" is enabled.

::*> volume efficiency modify -vserver svm -volume vol1 -compression true

::*> volume efficiency inactive-data-compression modify -vserver svm -volume vol1 -is-enabled true

::*> volume efficiency inactive-data-compression start -volume vol1 -inactive-days 0
Inactive data compression scan started on volume "vol1" in Vserver "svm"

::*> volume efficiency inactive-data-compression show -instance

                                                                Volume: vol1
                                                               Vserver: svm
                                                            Is Enabled: true
                                                             Scan Mode: default
                                                              Progress: RUNNING
                                                                Status: SUCCESS
                                                 Compression Algorithm: lzopro
                                                        Failure Reason: -
                                                          Total Blocks: -
                                                Total blocks Processed: -
                                                            Percentage: 0%
                                                  Phase1 L1s Processed: 2227
                                                    Phase1 Lns Skipped:
                                                                        L1:     0
                                                                        L2:     0
                                                                        L3:     0
                                                                        L4:     0
                                                                        L5:     0
                                                                        L6:     0
                                                                        L7:     0
                                                   Phase2 Total Blocks: 0
                                               Phase2 Blocks Processed: 0
                                     Number of Cold Blocks Encountered: 567240
                                             Number of Repacked Blocks: 0
                                     Number of Compression Done Blocks: 563912
                                              Number of Vol-Overwrites: 0
           Time since Last Inactive Data Compression Scan started(sec): 0
             Time since Last Inactive Data Compression Scan ended(sec): 0
Time since Last Successful Inactive Data Compression Scan started(sec): -
  Time since Last Successful Inactive Data Compression Scan ended(sec): 0
                           Average time for Cold Data Compression(sec): 0
                                                        Tuning Enabled: true
                                                             Threshold: 14
                                                 Threshold Upper Limit: 21
                                                 Threshold Lower Limit: 14
                                            Client Read history window: 14
                                        Incompressible Data Percentage: 0%


::*> volume efficiency inactive-data-compression show -instance

                                                                Volume: vol1
                                                               Vserver: svm
                                                            Is Enabled: true
                                                             Scan Mode: default
                                                              Progress: RUNNING
                                                                Status: SUCCESS
                                                 Compression Algorithm: lzopro
                                                        Failure Reason: -
                                                          Total Blocks: -
                                                Total blocks Processed: -
                                                            Percentage: 0%
                                                  Phase1 L1s Processed: 29576
                                                    Phase1 Lns Skipped:
                                                                        L1:     0
                                                                        L2:     0
                                                                        L3:     0
                                                                        L4:     0
                                                                        L5:     0
                                                                        L6:     0
                                                                        L7:     0
                                                   Phase2 Total Blocks: 0
                                               Phase2 Blocks Processed: 0
                                     Number of Cold Blocks Encountered: 7568304
                                             Number of Repacked Blocks: 0
                                     Number of Compression Done Blocks: 7542400
                                              Number of Vol-Overwrites: 0
           Time since Last Inactive Data Compression Scan started(sec): 0
             Time since Last Inactive Data Compression Scan ended(sec): 0
Time since Last Successful Inactive Data Compression Scan started(sec): -
  Time since Last Successful Inactive Data Compression Scan ended(sec): 0
                           Average time for Cold Data Compression(sec): 0
                                                        Tuning Enabled: true
                                                             Threshold: 14
                                                 Threshold Upper Limit: 21
                                                 Threshold Lower Limit: 14
                                            Client Read history window: 14
                                        Incompressible Data Percentage: 0%


::*> volume efficiency inactive-data-compression show -instance

                                                                Volume: vol1
                                                               Vserver: svm
                                                            Is Enabled: true
                                                             Scan Mode: -
                                                              Progress: IDLE
                                                                Status: SUCCESS
                                                 Compression Algorithm: lzopro
                                                        Failure Reason: -
                                                          Total Blocks: -
                                                Total blocks Processed: -
                                                            Percentage: -
                                                  Phase1 L1s Processed: -
                                                    Phase1 Lns Skipped: -
                                                   Phase2 Total Blocks: -
                                               Phase2 Blocks Processed: -
                                     Number of Cold Blocks Encountered: 8420336
                                             Number of Repacked Blocks: 0
                                     Number of Compression Done Blocks: 8391672
                                              Number of Vol-Overwrites: 0
           Time since Last Inactive Data Compression Scan started(sec): 30
             Time since Last Inactive Data Compression Scan ended(sec): 19
Time since Last Successful Inactive Data Compression Scan started(sec): -
  Time since Last Successful Inactive Data Compression Scan ended(sec): 19
                           Average time for Cold Data Compression(sec): 11
                                                        Tuning Enabled: true
                                                             Threshold: 14
                                                 Threshold Upper Limit: 21
                                                 Threshold Lower Limit: 14
                                            Client Read history window: 14
                                        Incompressible Data Percentage: 0%

It completed in about 11 seconds.

Number of Compression Done Blocks is 8,391,672. Since an ONTAP data block is 4 KiB, the compressed data amounts to 8,391,672 × 4 KiB / 1,024 / 1,024 ≈ 32.01 GiB. So every data block of the prepared test file was compressed.
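
As a quick sanity check of that arithmetic:

$ echo 'scale=2; 8391672 * 4 / 1024 / 1024' | bc
32.01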

The Storage Efficiency, volume, and aggregate information of the N. Virginia FSxN after running Inactive data compression is as follows.

::*> volume efficiency show
Vserver    Volume           State     Status      Progress           Policy
---------- ---------------- --------- ----------- ------------------ ----------
svm        vol1             Enabled   Idle        Idle for 00:25:51  auto

::*> volume show -volume vol1 -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared,logical-used, logical-used-percent,logical-used-by-afs, logical-available
vserver volume size available filesystem-size total   used    percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- ------ ---- --------- --------------- ------- ------- ------------ ------------------ -------------------------- ------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- -------------------------------------------
svm     vol1   64GB 28.66GB   64GB            60.80GB 32.14GB 52%          0B                 0%                         0B                  32.14GB      53%                  -                 32.14GB             0B                                  0%

::*> volume show-footprint -volume vol1


      Vserver : svm
      Volume  : vol1

      Feature                                         Used       Used%
      --------------------------------             ----------    -----
      Volume Data Footprint                           32.14GB       4%
             Footprint in Performance Tier            32.27GB     100%
             Footprint in FSxFabricpoolObjectStore         0B       0%
      Volume Guarantee                                     0B       0%
      Flexible Volume Metadata                        322.4MB       0%
      Delayed Frees                                   132.5MB       0%
      File Operation Metadata                             4KB       0%

      Total Footprint                                 32.58GB       4%

      Footprint Data Reduction                        30.90GB       3%
           Auto Adaptive Compression                  30.90GB       3%
      Effective Total Footprint                        1.68GB       0%

::*> aggr show-efficiency -instance

                             Name of the Aggregate: aggr1
                      Node where Aggregate Resides: FsxId0ff65f4e6a2c26228-01
Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 31.91GB
                               Total Physical Used: 15.36GB
                    Total Storage Efficiency Ratio: 2.08:1
Total Data Reduction Logical Used Without Snapshots: 31.91GB
Total Data Reduction Physical Used Without Snapshots: 15.36GB
Total Data Reduction Efficiency Ratio Without Snapshots: 2.08:1
Total Data Reduction Logical Used without snapshots and flexclones: 31.91GB
Total Data Reduction Physical Used without snapshots and flexclones: 15.36GB
Total Data Reduction Efficiency Ratio without snapshots and flexclones: 2.08:1
Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 32.14GB
Total Physical Used in FabricPool Performance Tier: 15.68GB
Total FabricPool Performance Tier Storage Efficiency Ratio: 2.05:1
Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 32.14GB
Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 15.68GB
Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 2.05:1
                Logical Space Used for All Volumes: 31.91GB
               Physical Space Used for All Volumes: 31.91GB
               Space Saved by Volume Deduplication: 0B
Space Saved by Volume Deduplication and pattern detection: 0B
                Volume Deduplication Savings ratio: 1.00:1
                 Space Saved by Volume Compression: 0B
                  Volume Compression Savings ratio: 1.00:1
      Space Saved by Inline Zero Pattern Detection: 0B
                    Volume Data Reduction SE Ratio: 1.00:1
               Logical Space Used by the Aggregate: 45.98GB
              Physical Space Used by the Aggregate: 15.36GB
           Space Saved by Aggregate Data Reduction: 30.63GB
                 Aggregate Data Reduction SE Ratio: 2.99:1
              Logical Size Used by Snapshot Copies: 0B
             Physical Size Used by Snapshot Copies: 0B
              Snapshot Volume Data Reduction Ratio: 1.00:1
            Logical Size Used by FlexClone Volumes: 0B
          Physical Sized Used by FlexClone Volumes: 0B
             FlexClone Volume Data Reduction Ratio: 1.00:1
Snapshot And FlexClone Volume Data Reduction SE Ratio: 1.00:1
                         Number of Volumes Offline: 0
                    Number of SIS Disabled Volumes: 1
         Number of SIS Change Log Disabled Volumes: 0

::*> aggr show -fields availsize, usedsize, size, physical-used, physical-used-percent, data-compaction-space-saved, data-compaction-space-saved-percent, data-compacted-count, composite-capacity-tier-used, sis-space-saved, sis-space-saved-percent, sis-shared-count, inactive-data-reporting-start-timestamp
aggregate availsize size    usedsize physical-used physical-used-percent data-compaction-space-saved data-compaction-space-saved-percent data-compacted-count composite-capacity-tier-used sis-space-saved sis-space-saved-percent sis-shared-count inactive-data-reporting-start-timestamp
--------- --------- ------- -------- ------------- --------------------- --------------------------- ----------------------------------- -------------------- ---------------------------- --------------- ----------------------- ---------------- ---------------------------------------
aggr1     848.2GB   861.8GB 13.58GB  19.03GB       2%                    30.63GB                     69%                                 1.37GB               0B                           30.63GB         69%                     1.37GB           -

::*> aggr show-space

      Aggregate : aggr1
      Performance Tier
      Feature                                          Used      Used%
      --------------------------------           ----------     ------
      Volume Footprints                             33.59GB         4%
      Aggregate Metadata                            10.62GB         1%
      Snapshot Reserve                              45.36GB         5%
      Total Used                                    58.94GB         6%

      Total Physical Used                           19.03GB         2%


      Total Provisioned Space                          65GB         7%


      Aggregate : aggr1
      Object Store: FSxFabricpoolObjectStore
      Feature                                          Used      Used%
      --------------------------------           ----------     ------
      Logical Used                                       0B          -
      Logical Referenced Capacity                        0B          -
      Logical Unreferenced Capacity                      0B          -

      Total Physical Used                                0B          -



2 entries were displayed.

Total Physical Used has halved from 32.13GB to 15.36GB, so I take this as roughly 50% compression.
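
As a rough ratio check:

$ echo 'scale=3; 1 - 15.36 / 32.13' | bc
.522

So roughly 52% of the physical space was saved, consistent with the halving above.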

What catches my attention is that Auto Adaptive Compression in volume show-footprint, Space Saved by Aggregate Data Reduction in aggr show-efficiency, and data-compaction-space-saved in aggr show are all roughly 30GB.

I can understand Auto Adaptive Compression increasing because I ran Inactive data compression, but data-compaction-space-saved has increased as well.

Is compaction also executed together with Inactive data compression?

Running Inactive data compression on the Osaka FSxN

I perform the same operations on the Osaka FSxN.

First, I run Inactive data compression.

::*> volume efficiency on -vserver svm -volume vol1
Efficiency for volume "vol1" of Vserver "svm" is enabled.

::*> volume efficiency modify -vserver svm -volume vol1 -compression true

::*> volume efficiency inactive-data-compression modify -vserver svm -volume vol1 -is-enabled true

::*> volume efficiency inactive-data-compression start -volume vol1 -inactive-days 0
Inactive data compression scan started on volume "vol1" in Vserver "svm"

::*> volume efficiency inactive-data-compression show -instance

                                                                Volume: vol1
                                                               Vserver: svm
                                                            Is Enabled: true
                                                             Scan Mode: default
                                                              Progress: RUNNING
                                                                Status: SUCCESS
                                                 Compression Algorithm: lzopro
                                                        Failure Reason: -
                                                          Total Blocks: -
                                                Total blocks Processed: -
                                                            Percentage: 0%
                                                  Phase1 L1s Processed: 2392
                                                    Phase1 Lns Skipped:
                                                                        L1:     0
                                                                        L2:     0
                                                                        L3:     0
                                                                        L4:     0
                                                                        L5:     0
                                                                        L6:     0
                                                                        L7:     0
                                                   Phase2 Total Blocks: 0
                                               Phase2 Blocks Processed: 0
                                     Number of Cold Blocks Encountered: 610184
                                             Number of Repacked Blocks: 0
                                     Number of Compression Done Blocks: 605952
                                              Number of Vol-Overwrites: 0
           Time since Last Inactive Data Compression Scan started(sec): 0
             Time since Last Inactive Data Compression Scan ended(sec): 0
Time since Last Successful Inactive Data Compression Scan started(sec): -
  Time since Last Successful Inactive Data Compression Scan ended(sec): 0
                           Average time for Cold Data Compression(sec): 0
                                                        Tuning Enabled: true
                                                             Threshold: 14
                                                 Threshold Upper Limit: 21
                                                 Threshold Lower Limit: 14
                                            Client Read history window: 14
                                        Incompressible Data Percentage: 0%


::*> volume efficiency inactive-data-compression show -instance

                                                                Volume: vol1
                                                               Vserver: svm
                                                            Is Enabled: true
                                                             Scan Mode: default
                                                              Progress: RUNNING
                                                                Status: SUCCESS
                                                 Compression Algorithm: lzopro
                                                        Failure Reason: -
                                                          Total Blocks: -
                                                Total blocks Processed: -
                                                            Percentage: 10%
                                                  Phase1 L1s Processed: 32722
                                                    Phase1 Lns Skipped:
                                                                        L1:     0
                                                                        L2:     0
                                                                        L3:     0
                                                                        L4:     0
                                                                        L5:     0
                                                                        L6:     0
                                                                        L7:     0
                                                   Phase2 Total Blocks: 16506912
                                               Phase2 Blocks Processed: 1656232
                                     Number of Cold Blocks Encountered: 8373760
                                             Number of Repacked Blocks: 0
                                     Number of Compression Done Blocks: 8347136
                                              Number of Vol-Overwrites: 0
           Time since Last Inactive Data Compression Scan started(sec): 0
             Time since Last Inactive Data Compression Scan ended(sec): 0
Time since Last Successful Inactive Data Compression Scan started(sec): -
  Time since Last Successful Inactive Data Compression Scan ended(sec): 0
                           Average time for Cold Data Compression(sec): 0
                                                        Tuning Enabled: true
                                                             Threshold: 14
                                                 Threshold Upper Limit: 21
                                                 Threshold Lower Limit: 14
                                            Client Read history window: 14
                                        Incompressible Data Percentage: 0%


::*> volume efficiency inactive-data-compression show -instance

                                                                Volume: vol1
                                                               Vserver: svm
                                                            Is Enabled: true
                                                             Scan Mode: -
                                                              Progress: IDLE
                                                                Status: SUCCESS
                                                 Compression Algorithm: lzopro
                                                        Failure Reason: -
                                                          Total Blocks: -
                                                Total blocks Processed: -
                                                            Percentage: -
                                                  Phase1 L1s Processed: -
                                                    Phase1 Lns Skipped: -
                                                   Phase2 Total Blocks: -
                                               Phase2 Blocks Processed: -
                                     Number of Cold Blocks Encountered: 8418432
                                             Number of Repacked Blocks: 0
                                     Number of Compression Done Blocks: 8390720
                                              Number of Vol-Overwrites: 0
           Time since Last Inactive Data Compression Scan started(sec): 17
             Time since Last Inactive Data Compression Scan ended(sec): 6
Time since Last Successful Inactive Data Compression Scan started(sec): -
  Time since Last Successful Inactive Data Compression Scan ended(sec): 6
                           Average time for Cold Data Compression(sec): 11
                                                        Tuning Enabled: true
                                                             Threshold: 14
                                                 Threshold Upper Limit: 21
                                                 Threshold Lower Limit: 14
                                            Client Read history window: 14
                                        Incompressible Data Percentage: 0%

Here too, 8,390,720 data blocks were compressed, so we can tell that almost all of the data blocks of the 32GiB test file we prepared were compressed.
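
As a quick sanity check, assuming the usual 4KiB block size, the compressed block count lines up with the size of the test file:

# 8,390,720 compressed blocks x 4 KiB per block, expressed in GiB
$ echo $(( 8390720 * 4 / 1024 / 1024 ))
32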

The Storage Efficiency, volume, and aggregate information for the Osaka FSxN after running Inactive data compression is as follows.

::*> volume efficiency show
Vserver    Volume           State     Status      Progress           Policy
---------- ---------------- --------- ----------- ------------------ ----------
svm        vol1             Enabled   Idle        Idle for 00:39:49  auto

::*> volume show -volume vol1 -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared,logical-used, logical-used-percent,logical-used-by-afs, logical-available
vserver volume size available filesystem-size total   used    percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- ------ ---- --------- --------------- ------- ------- ------------ ------------------ -------------------------- ------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- -------------------------------------------
svm     vol1   64GB 28.66GB   64GB            60.80GB 32.14GB 52%          0B                 0%                         0B                  32.14GB      53%                  -                 32.14GB             0B                                  0%

::*> volume show-footprint -volume vol1


      Vserver : svm
      Volume  : vol1

      Feature                                         Used       Used%
      --------------------------------             ----------    -----
      Volume Data Footprint                           32.14GB       4%
             Footprint in Performance Tier            32.27GB     100%
             Footprint in FSxFabricpoolObjectStore         0B       0%
      Volume Guarantee                                     0B       0%
      Flexible Volume Metadata                        322.4MB       0%
      Delayed Frees                                   133.1MB       0%
      File Operation Metadata                             4KB       0%

      Total Footprint                                 32.58GB       4%

      Footprint Data Reduction                        30.91GB       3%
           Auto Adaptive Compression                  30.91GB       3%
      Effective Total Footprint                        1.67GB       0%

::*> aggr show-efficiency -instance

                             Name of the Aggregate: aggr1
                      Node where Aggregate Resides: FsxId0f1302327a12b6488-01
Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 31.92GB
                               Total Physical Used: 30.54GB
                    Total Storage Efficiency Ratio: 1.05:1
Total Data Reduction Logical Used Without Snapshots: 31.92GB
Total Data Reduction Physical Used Without Snapshots: 30.54GB
Total Data Reduction Efficiency Ratio Without Snapshots: 1.05:1
Total Data Reduction Logical Used without snapshots and flexclones: 31.92GB
Total Data Reduction Physical Used without snapshots and flexclones: 30.54GB
Total Data Reduction Efficiency Ratio without snapshots and flexclones: 1.05:1
Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 32.14GB
Total Physical Used in FabricPool Performance Tier: 30.82GB
Total FabricPool Performance Tier Storage Efficiency Ratio: 1.04:1
Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 32.14GB
Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 30.82GB
Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 1.04:1
                Logical Space Used for All Volumes: 31.92GB
               Physical Space Used for All Volumes: 31.92GB
               Space Saved by Volume Deduplication: 0B
Space Saved by Volume Deduplication and pattern detection: 0B
                Volume Deduplication Savings ratio: 1.00:1
                 Space Saved by Volume Compression: 0B
                  Volume Compression Savings ratio: 1.00:1
      Space Saved by Inline Zero Pattern Detection: 0B
                    Volume Data Reduction SE Ratio: 1.00:1
               Logical Space Used by the Aggregate: 61.17GB
              Physical Space Used by the Aggregate: 30.54GB
           Space Saved by Aggregate Data Reduction: 30.63GB
                 Aggregate Data Reduction SE Ratio: 2.00:1
              Logical Size Used by Snapshot Copies: 300KB
             Physical Size Used by Snapshot Copies: 164KB
              Snapshot Volume Data Reduction Ratio: 1.83:1
            Logical Size Used by FlexClone Volumes: 0B
          Physical Sized Used by FlexClone Volumes: 0B
             FlexClone Volume Data Reduction Ratio: 1.00:1
Snapshot And FlexClone Volume Data Reduction SE Ratio: 1.83:1
                         Number of Volumes Offline: 0
                    Number of SIS Disabled Volumes: 1
         Number of SIS Change Log Disabled Volumes: 0

::*> aggr show -fields availsize, usedsize, size, physical-used, physical-used-percent, data-compaction-space-saved, data-compaction-space-saved-percent, data-compacted-count, composite-capacity-tier-used, sis-space-saved, sis-space-saved-percent, sis-shared-count, inactive-data-reporting-start-timestamp
aggregate availsize size    usedsize physical-used physical-used-percent data-compaction-space-saved data-compaction-space-saved-percent data-compacted-count composite-capacity-tier-used sis-space-saved sis-space-saved-percent sis-shared-count inactive-data-reporting-start-timestamp
--------- --------- ------- -------- ------------- --------------------- --------------------------- ----------------------------------- -------------------- ---------------------------- --------------- ----------------------- ---------------- ---------------------------------------
aggr1     848.2GB   861.8GB 13.55GB  34.15GB       4%                    30.63GB                     69%                                 1.36GB               0B                           30.63GB         69%                     1.36GB           -

::*> aggr show-space

      Aggregate : aggr1
      Performance Tier
      Feature                                          Used      Used%
      --------------------------------           ----------     ------
      Volume Footprints                             33.59GB         4%
      Aggregate Metadata                            10.59GB         1%
      Snapshot Reserve                              45.36GB         5%
      Total Used                                    58.91GB         6%

      Total Physical Used                           34.15GB         4%


      Total Provisioned Space                          65GB         7%


      Aggregate : aggr1
      Object Store: FSxFabricpoolObjectStore
      Feature                                          Used      Used%
      --------------------------------           ----------     ------
      Logical Used                                       0B          -
      Logical Referenced Capacity                        0B          -
      Logical Unreferenced Capacity                      0B          -

      Total Physical Used                                0B          -



2 entries were displayed.

As with the Virginia FSxN, Auto Adaptive Compression in volume show-footprint, Space Saved by Aggregate Data Reduction in aggr show-efficiency, and data-compaction-space-saved in aggr show are all roughly 30GB.

On the other hand, Total Physical Used is 30.54GB and has not decreased at all. Looking more closely, Logical Space Used by the Aggregate is 61.17GB, so the logical size of the aggregate has doubled.

Perhaps Inactive data compression writes the compressed data to new data blocks, and freeing the old blocks takes some time.

Let's wait a little while.
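
Rather than re-running the commands by hand, you could poll the value; here is a minimal sketch that checks the aggregate's physical usage over SSH once a minute. The management endpoint hostname is assumed from this file system's ID and the usual FSxN naming convention, so adjust it and the interval to your environment.

# Watch the aggregate's physical usage until the freed blocks are reflected
$ while true; do
    ssh fsxadmin@management.fs-0f1302327a12b6488.fsx.ap-northeast-3.amazonaws.com \
      "aggr show -fields physical-used"
    sleep 60
  done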

The Storage Efficiency, volume, and aggregate information for the Osaka FSxN after leaving it alone for about 5 minutes is as follows.

::*> volume efficiency show
Vserver    Volume           State     Status      Progress           Policy
---------- ---------------- --------- ----------- ------------------ ----------
svm        vol1             Enabled   Idle        Idle for 00:43:10  auto

::*> volume show -volume vol1 -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared,logical-used, logical-used-percent,logical-used-by-afs, logical-available
vserver volume size available filesystem-size total   used    percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- ------ ---- --------- --------------- ------- ------- ------------ ------------------ -------------------------- ------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- -------------------------------------------
svm     vol1   64GB 28.66GB   64GB            60.80GB 32.14GB 52%          0B                 0%                         0B                  32.14GB      53%                  -                 32.14GB             0B                                  0%

::*> volume show-footprint -volume vol1


      Vserver : svm
      Volume  : vol1

      Feature                                         Used       Used%
      --------------------------------             ----------    -----
      Volume Data Footprint                           32.14GB       4%
             Footprint in Performance Tier            32.27GB     100%
             Footprint in FSxFabricpoolObjectStore         0B       0%
      Volume Guarantee                                     0B       0%
      Flexible Volume Metadata                        322.4MB       0%
      Delayed Frees                                   133.1MB       0%
      File Operation Metadata                             4KB       0%

      Total Footprint                                 32.58GB       4%

      Footprint Data Reduction                        30.91GB       3%
           Auto Adaptive Compression                  30.91GB       3%
      Effective Total Footprint                        1.67GB       0%

::*> aggr show-efficiency -instance

                             Name of the Aggregate: aggr1
                      Node where Aggregate Resides: FsxId0f1302327a12b6488-01
Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 31.92GB
                               Total Physical Used: 8.70GB
                    Total Storage Efficiency Ratio: 3.67:1
Total Data Reduction Logical Used Without Snapshots: 31.92GB
Total Data Reduction Physical Used Without Snapshots: 8.70GB
Total Data Reduction Efficiency Ratio Without Snapshots: 3.67:1
Total Data Reduction Logical Used without snapshots and flexclones: 31.92GB
Total Data Reduction Physical Used without snapshots and flexclones: 8.70GB
Total Data Reduction Efficiency Ratio without snapshots and flexclones: 3.67:1
Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 32.14GB
Total Physical Used in FabricPool Performance Tier: 8.99GB
Total FabricPool Performance Tier Storage Efficiency Ratio: 3.57:1
Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 32.14GB
Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 8.99GB
Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 3.57:1
                Logical Space Used for All Volumes: 31.92GB
               Physical Space Used for All Volumes: 31.92GB
               Space Saved by Volume Deduplication: 0B
Space Saved by Volume Deduplication and pattern detection: 0B
                Volume Deduplication Savings ratio: 1.00:1
                 Space Saved by Volume Compression: 0B
                  Volume Compression Savings ratio: 1.00:1
      Space Saved by Inline Zero Pattern Detection: 0B
                    Volume Data Reduction SE Ratio: 1.00:1
               Logical Space Used by the Aggregate: 39.34GB
              Physical Space Used by the Aggregate: 8.70GB
           Space Saved by Aggregate Data Reduction: 30.63GB
                 Aggregate Data Reduction SE Ratio: 4.52:1
              Logical Size Used by Snapshot Copies: 300KB
             Physical Size Used by Snapshot Copies: 164KB
              Snapshot Volume Data Reduction Ratio: 1.83:1
            Logical Size Used by FlexClone Volumes: 0B
          Physical Sized Used by FlexClone Volumes: 0B
             FlexClone Volume Data Reduction Ratio: 1.00:1
Snapshot And FlexClone Volume Data Reduction SE Ratio: 1.83:1
                         Number of Volumes Offline: 0
                    Number of SIS Disabled Volumes: 1
         Number of SIS Change Log Disabled Volumes: 0

::*> aggr show -fields availsize, usedsize, size, physical-used, physical-used-percent, data-compaction-space-saved, data-compaction-space-saved-percent, data-compacted-count, composite-capacity-tier-used, sis-space-saved, sis-space-saved-percent, sis-shared-count, inactive-data-reporting-start-timestamp
aggregate availsize size    usedsize physical-used physical-used-percent data-compaction-space-saved data-compaction-space-saved-percent data-compacted-count composite-capacity-tier-used sis-space-saved sis-space-saved-percent sis-shared-count inactive-data-reporting-start-timestamp
--------- --------- ------- -------- ------------- --------------------- --------------------------- ----------------------------------- -------------------- ---------------------------- --------------- ----------------------- ---------------- ---------------------------------------
aggr1     848.2GB   861.8GB 13.51GB  12.30GB       1%                    30.63GB                     69%                                 1.36GB               0B                           30.63GB         69%                     1.36GB           -

::*> aggr show-space

      Aggregate : aggr1
      Performance Tier
      Feature                                          Used      Used%
      --------------------------------           ----------     ------
      Volume Footprints                             33.59GB         4%
      Aggregate Metadata                            10.55GB         1%
      Snapshot Reserve                              45.36GB         5%
      Total Used                                    58.87GB         6%

      Total Physical Used                           12.30GB         1%


      Total Provisioned Space                          65GB         7%


      Aggregate : aggr1
      Object Store: FSxFabricpoolObjectStore
      Feature                                          Used      Used%
      --------------------------------           ----------     ------
      Logical Used                                       0B          -
      Logical Referenced Capacity                        0B          -
      Logical Unreferenced Capacity                      0B          -

      Total Physical Used                                0B          -



2 entries were displayed.

Auto Adaptive Compression in volume show-footprint, Space Saved by Aggregate Data Reduction in aggr show-efficiency, and data-compaction-space-saved in aggr show are all still roughly 30GB, as before.

On the other hand, Total Physical Used has dropped significantly from 30.54GB to 8.70GB, and Logical Space Used by the Aggregate from 61.17GB to 39.34GB.

This shows that it takes a little while for the data blocks reduced by Inactive data compression to be freed from the aggregate.

Also, of the 32GiB written, a little under 9GiB is physically consumed, so the space savings are roughly 72%.
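
That figure follows directly from the numbers above, 8.99GB physically used in the FabricPool Performance Tier against 32.14GB of logical data:

# Space savings = 1 - physical used / logical used
$ echo "scale=3; 1 - 8.99 / 32.14" | bc
.721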

The Storage Efficiency, volume, and aggregate information for the Osaka FSxN after leaving it for another 17 minutes is as follows.

::*> volume efficiency show
Vserver    Volume           State     Status      Progress           Policy
---------- ---------------- --------- ----------- ------------------ ----------
svm        vol1             Enabled   Idle        Idle for 01:00:24  auto

::*> volume show -volume vol1 -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared,logical-used, logical-used-percent,logical-used-by-afs, logical-available
vserver volume size available filesystem-size total   used    percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- ------ ---- --------- --------------- ------- ------- ------------ ------------------ -------------------------- ------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- -------------------------------------------
svm     vol1   64GB 28.66GB   64GB            60.80GB 32.14GB 52%          0B                 0%                         0B                  32.14GB      53%                  -                 32.14GB             0B                                  0%

::*> volume show-footprint -volume vol1


      Vserver : svm
      Volume  : vol1

      Feature                                         Used       Used%
      --------------------------------             ----------    -----
      Volume Data Footprint                           32.14GB       4%
             Footprint in Performance Tier            32.27GB     100%
             Footprint in FSxFabricpoolObjectStore         0B       0%
      Volume Guarantee                                     0B       0%
      Flexible Volume Metadata                        322.4MB       0%
      Delayed Frees                                   133.2MB       0%
      File Operation Metadata                             4KB       0%

      Total Footprint                                 32.58GB       4%

      Footprint Data Reduction                        30.91GB       3%
           Auto Adaptive Compression                  30.91GB       3%
      Effective Total Footprint                        1.67GB       0%

::*> aggr show-efficiency -instance

                             Name of the Aggregate: aggr1
                      Node where Aggregate Resides: FsxId0f1302327a12b6488-01
Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 31.92GB
                               Total Physical Used: 8.58GB
                    Total Storage Efficiency Ratio: 3.72:1
Total Data Reduction Logical Used Without Snapshots: 31.92GB
Total Data Reduction Physical Used Without Snapshots: 8.58GB
Total Data Reduction Efficiency Ratio Without Snapshots: 3.72:1
Total Data Reduction Logical Used without snapshots and flexclones: 31.92GB
Total Data Reduction Physical Used without snapshots and flexclones: 8.58GB
Total Data Reduction Efficiency Ratio without snapshots and flexclones: 3.72:1
Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 32.14GB
Total Physical Used in FabricPool Performance Tier: 8.87GB
Total FabricPool Performance Tier Storage Efficiency Ratio: 3.62:1
Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 32.14GB
Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 8.87GB
Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 3.62:1
                Logical Space Used for All Volumes: 31.92GB
               Physical Space Used for All Volumes: 31.92GB
               Space Saved by Volume Deduplication: 0B
Space Saved by Volume Deduplication and pattern detection: 0B
                Volume Deduplication Savings ratio: 1.00:1
                 Space Saved by Volume Compression: 0B
                  Volume Compression Savings ratio: 1.00:1
      Space Saved by Inline Zero Pattern Detection: 0B
                    Volume Data Reduction SE Ratio: 1.00:1
               Logical Space Used by the Aggregate: 39.21GB
              Physical Space Used by the Aggregate: 8.58GB
           Space Saved by Aggregate Data Reduction: 30.63GB
                 Aggregate Data Reduction SE Ratio: 4.57:1
              Logical Size Used by Snapshot Copies: 660KB
             Physical Size Used by Snapshot Copies: 304KB
              Snapshot Volume Data Reduction Ratio: 2.17:1
            Logical Size Used by FlexClone Volumes: 0B
          Physical Sized Used by FlexClone Volumes: 0B
             FlexClone Volume Data Reduction Ratio: 1.00:1
Snapshot And FlexClone Volume Data Reduction SE Ratio: 2.17:1
                         Number of Volumes Offline: 0
                    Number of SIS Disabled Volumes: 1
         Number of SIS Change Log Disabled Volumes: 0

::*> aggr show -fields availsize, usedsize, size, physical-used, physical-used-percent, data-compaction-space-saved, data-compaction-space-saved-percent, data-compacted-count, composite-capacity-tier-used, sis-space-saved, sis-space-saved-percent, sis-shared-count, inactive-data-reporting-start-timestamp
aggregate availsize size    usedsize physical-used physical-used-percent data-compaction-space-saved data-compaction-space-saved-percent data-compacted-count composite-capacity-tier-used sis-space-saved sis-space-saved-percent sis-shared-count inactive-data-reporting-start-timestamp
--------- --------- ------- -------- ------------- --------------------- --------------------------- ----------------------------------- -------------------- ---------------------------- --------------- ----------------------- ---------------- ---------------------------------------
aggr1     848.4GB   861.8GB 13.33GB  12.08GB       1%                    30.63GB                     70%                                 1.36GB               0B                           30.63GB         70%                     1.36GB           -

::*> aggr show-space

      Aggregate : aggr1
      Performance Tier
      Feature                                          Used      Used%
      --------------------------------           ----------     ------
      Volume Footprints                             33.59GB         4%
      Aggregate Metadata                            10.37GB         1%
      Snapshot Reserve                              45.36GB         5%
      Total Used                                    58.68GB         6%

      Total Physical Used                           12.08GB         1%


      Total Provisioned Space                          65GB         7%


      Aggregate : aggr1
      Object Store: FSxFabricpoolObjectStore
      Feature                                          Used      Used%
      --------------------------------           ----------     ------
      Logical Used                                       0B          -
      Logical Referenced Capacity                        0B          -
      Logical Unreferenced Capacity                      0B          -

      Total Physical Used                                0B          -



2 entries were displayed.

Total Physical Used went from 8.70GB to 8.58GB and Logical Space Used by the Aggregate from 39.34GB to 39.21GB, so there was no significant change.

Tiering to capacity pool storage

Now let's tier the data on the Virginia FSxN to the capacity pool.

::*> volume show -volume vol1 -fields tiering-policy
vserver volume tiering-policy
------- ------ --------------
svm     vol1   none

::*> volume modify -vserver svm -volume vol1 -tiering-policy all
Volume modify successful on volume vol1 of Vserver svm.

::*> volume show -volume vol1 -fields tiering-policy
vserver volume tiering-policy
------- ------ --------------
svm     vol1   all

::*> volume show-footprint -volume vol1


      Vserver : svm
      Volume  : vol1

      Feature                                         Used       Used%
      --------------------------------             ----------    -----
      Volume Data Footprint                           32.14GB       4%
             Footprint in Performance Tier            32.27GB     100%
             Footprint in FSxFabricpoolObjectStore         0B       0%
      Volume Guarantee                                     0B       0%
      Flexible Volume Metadata                        322.4MB       0%
      Delayed Frees                                   132.8MB       0%
      File Operation Metadata                             4KB       0%

      Total Footprint                                 32.58GB       4%

      Footprint Data Reduction                        30.90GB       3%
           Auto Adaptive Compression                  30.90GB       3%
      Effective Total Footprint                        1.68GB       0%

::*> volume show-footprint -volume vol1


      Vserver : svm
      Volume  : vol1

      Feature                                         Used       Used%
      --------------------------------             ----------    -----
      Volume Data Footprint                           32.14GB       4%
             Footprint in Performance Tier            17.50GB      54%
             Footprint in FSxFabricpoolObjectStore
                                                      14.77GB      46%
      Volume Guarantee                                     0B       0%
      Flexible Volume Metadata                        322.4MB       0%
      Delayed Frees                                   139.4MB       0%
      File Operation Metadata                             4KB       0%

      Total Footprint                                 32.59GB       4%

      Footprint Data Reduction                        16.76GB       2%
           Auto Adaptive Compression                  16.76GB       2%
      Footprint Data Reduction in capacity tier       13.74GB        -
      Effective Total Footprint                        2.09GB       0%

::*> volume show-footprint -volume vol1


      Vserver : svm
      Volume  : vol1

      Feature                                         Used       Used%
      --------------------------------             ----------    -----
      Volume Data Footprint                           32.14GB       4%
             Footprint in Performance Tier            419.1MB       1%
             Footprint in FSxFabricpoolObjectStore
                                                         32GB      99%
      Volume Guarantee                                     0B       0%
      Flexible Volume Metadata                        322.4MB       0%
      Delayed Frees                                   276.1MB       0%
      File Operation Metadata                             4KB       0%

      Total Footprint                                 32.72GB       4%

      Footprint Data Reduction                        401.4MB       0%
           Auto Adaptive Compression                  401.4MB       0%
      Footprint Data Reduction in capacity tier       29.76GB        -
      Effective Total Footprint                        2.57GB       0%

99% of the data has been tiered to capacity pool storage.

In addition, Footprint Data Reduction in capacity tier is 29.76GB, which shows that most of the data was tiered while still compressed.

The Storage Efficiency, volume, and aggregate information for the Virginia FSxN after tiering is as follows.

::*> volume efficiency show
Vserver    Volume           State     Status      Progress           Policy
---------- ---------------- --------- ----------- ------------------ ----------
svm        vol1             Enabled   Idle        Idle for 00:32:10  auto

::*> volume show -volume vol1 -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared,logical-used, logical-used-percent,logical-used-by-afs, logical-available
vserver volume size available filesystem-size total   used    percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- ------ ---- --------- --------------- ------- ------- ------------ ------------------ -------------------------- ------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- -------------------------------------------
svm     vol1   64GB 28.66GB   64GB            60.80GB 32.14GB 52%          0B                 0%                         0B                  32.14GB      53%                  -                 32.14GB             -                                   -

::*> volume show-footprint -volume vol1


      Vserver : svm
      Volume  : vol1

      Feature                                         Used       Used%
      --------------------------------             ----------    -----
      Volume Data Footprint                           32.14GB       4%
             Footprint in Performance Tier            419.1MB       1%
             Footprint in FSxFabricpoolObjectStore
                                                         32GB      99%
      Volume Guarantee                                     0B       0%
      Flexible Volume Metadata                        322.4MB       0%
      Delayed Frees                                   276.1MB       0%
      File Operation Metadata                             4KB       0%

      Total Footprint                                 32.72GB       4%

      Footprint Data Reduction                        401.4MB       0%
           Auto Adaptive Compression                  401.4MB       0%
      Footprint Data Reduction in capacity tier       29.76GB        -
      Effective Total Footprint                        2.57GB       0%

::*> aggr show-efficiency -instance

                             Name of the Aggregate: aggr1
                      Node where Aggregate Resides: FsxId0ff65f4e6a2c26228-01
Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 31.91GB
                               Total Physical Used: 9.48GB
                    Total Storage Efficiency Ratio: 3.37:1
Total Data Reduction Logical Used Without Snapshots: 31.91GB
Total Data Reduction Physical Used Without Snapshots: 9.48GB
Total Data Reduction Efficiency Ratio Without Snapshots: 3.37:1
Total Data Reduction Logical Used without snapshots and flexclones: 31.91GB
Total Data Reduction Physical Used without snapshots and flexclones: 9.48GB
Total Data Reduction Efficiency Ratio without snapshots and flexclones: 3.37:1
Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 416.0MB
Total Physical Used in FabricPool Performance Tier: 7.38GB
Total FabricPool Performance Tier Storage Efficiency Ratio: 1.00:1
Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 416.0MB
Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 7.38GB
Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 1.00:1
                Logical Space Used for All Volumes: 31.91GB
               Physical Space Used for All Volumes: 31.91GB
               Space Saved by Volume Deduplication: 0B
Space Saved by Volume Deduplication and pattern detection: 0B
                Volume Deduplication Savings ratio: 1.00:1
                 Space Saved by Volume Compression: 0B
                  Volume Compression Savings ratio: 1.00:1
      Space Saved by Inline Zero Pattern Detection: 0B
                    Volume Data Reduction SE Ratio: 1.00:1
               Logical Space Used by the Aggregate: 21.10GB
              Physical Space Used by the Aggregate: 9.48GB
           Space Saved by Aggregate Data Reduction: 11.62GB
                 Aggregate Data Reduction SE Ratio: 2.23:1
              Logical Size Used by Snapshot Copies: 0B
             Physical Size Used by Snapshot Copies: 0B
              Snapshot Volume Data Reduction Ratio: 1.00:1
            Logical Size Used by FlexClone Volumes: 0B
          Physical Sized Used by FlexClone Volumes: 0B
             FlexClone Volume Data Reduction Ratio: 1.00:1
Snapshot And FlexClone Volume Data Reduction SE Ratio: 1.00:1
                         Number of Volumes Offline: 0
                    Number of SIS Disabled Volumes: 1
         Number of SIS Change Log Disabled Volumes: 0

::*> aggr show -fields availsize, usedsize, size, physical-used, physical-used-percent, data-compaction-space-saved, data-compaction-space-saved-percent, data-compacted-count, composite-capacity-tier-used, sis-space-saved, sis-space-saved-percent, sis-shared-count, inactive-data-reporting-start-timestamp
aggregate availsize size    usedsize physical-used physical-used-percent data-compaction-space-saved data-compaction-space-saved-percent data-compacted-count composite-capacity-tier-used sis-space-saved sis-space-saved-percent sis-shared-count inactive-data-reporting-start-timestamp
--------- --------- ------- -------- ------------- --------------------- --------------------------- ----------------------------------- -------------------- ---------------------------- --------------- ----------------------- ---------------- ---------------------------------------
aggr1     859.8GB   861.8GB 1.99GB   9.19GB        1%                    11.62GB                     85%                                 531.6MB              2.25GB                       11.62GB         85%                     531.6MB          -

::*> aggr show-space

      Aggregate : aggr1
      Performance Tier
      Feature                                          Used      Used%
      --------------------------------           ----------     ------
      Volume Footprints                              1.73GB         0%
      Aggregate Metadata                            11.88GB         1%
      Snapshot Reserve                              45.36GB         5%
      Total Used                                    47.34GB         5%

      Total Physical Used                            9.19GB         1%


      Total Provisioned Space                          65GB         7%


      Aggregate : aggr1
      Object Store: FSxFabricpoolObjectStore
      Feature                                          Used      Used%
      --------------------------------           ----------     ------
      Logical Used                                  32.28GB          -
      Logical Referenced Capacity                   32.12GB          -
      Logical Unreferenced Capacity                 159.8MB          -
      Space Saved by Storage Efficiency             30.03GB          -

      Total Physical Used                            2.25GB          -



2 entries were displayed.

Space Saved by Aggregate Data Reduction in aggr show-efficiency and data-compaction-space-saved in aggr show have both dropped sharply from 30.63GB to 11.62GB.

It seems that once the data is tiered, the reported savings from Inactive data compression and compaction on the SSD tier become smaller.

On the other hand, Total Physical Used under Object Store: FSxFabricpoolObjectStore in aggr show-space is 2.25GB, so the physical usage of the capacity pool storage is clearly far smaller than its logical size.
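
The object store figures in aggr show-space are also consistent with each other; subtracting the Storage Efficiency savings from the logical usage gives exactly the physical usage of the capacity pool:

# Logical Used - Space Saved by Storage Efficiency = Total Physical Used in the object store
$ echo "scale=2; 32.28 - 30.03" | bc
2.25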

Will these figures also change if we wait a while?

The Storage Efficiency, volume, and aggregate information for the Virginia FSxN after leaving it alone for about 8 minutes is as follows.

::*> volume efficiency show
Vserver    Volume           State     Status      Progress           Policy
---------- ---------------- --------- ----------- ------------------ ----------
svm        vol1             Enabled   Idle        Idle for 00:40:29  auto

::*> volume show -volume vol1 -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared,logical-used, logical-used-percent,logical-used-by-afs, logical-available
vserver volume size available filesystem-size total   used    percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- ------ ---- --------- --------------- ------- ------- ------------ ------------------ -------------------------- ------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- -------------------------------------------
svm     vol1   64GB 28.66GB   64GB            60.80GB 32.14GB 52%          0B                 0%                         0B                  32.14GB      53%                  -                 32.14GB             -                                   -

::*> volume show-footprint -volume vol1


      Vserver : svm
      Volume  : vol1

      Feature                                         Used       Used%
      --------------------------------             ----------    -----
      Volume Data Footprint                           32.14GB       4%
             Footprint in Performance Tier            419.1MB       1%
             Footprint in FSxFabricpoolObjectStore
                                                         32GB      99%
      Volume Guarantee                                     0B       0%
      Flexible Volume Metadata                        322.4MB       0%
      Delayed Frees                                   276.1MB       0%
      File Operation Metadata                             4KB       0%

      Total Footprint                                 32.72GB       4%

      Footprint Data Reduction                        401.4MB       0%
           Auto Adaptive Compression                  401.4MB       0%
      Footprint Data Reduction in capacity tier       29.76GB        -
      Effective Total Footprint                        2.57GB       0%

::*> aggr show-efficiency -instance

                             Name of the Aggregate: aggr1
                      Node where Aggregate Resides: FsxId0ff65f4e6a2c26228-01
Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 31.91GB
                               Total Physical Used: 1.83GB
                    Total Storage Efficiency Ratio: 17.42:1
Total Data Reduction Logical Used Without Snapshots: 31.91GB
Total Data Reduction Physical Used Without Snapshots: 1.83GB
Total Data Reduction Efficiency Ratio Without Snapshots: 17.42:1
Total Data Reduction Logical Used without snapshots and flexclones: 31.91GB
Total Data Reduction Physical Used without snapshots and flexclones: 1.83GB
Total Data Reduction Efficiency Ratio without snapshots and flexclones: 17.42:1
Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 416.0MB
Total Physical Used in FabricPool Performance Tier: 1.34GB
Total FabricPool Performance Tier Storage Efficiency Ratio: 1.00:1
Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 416.0MB
Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 1.34GB
Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 1.00:1
                Logical Space Used for All Volumes: 31.91GB
               Physical Space Used for All Volumes: 31.91GB
               Space Saved by Volume Deduplication: 0B
Space Saved by Volume Deduplication and pattern detection: 0B
                Volume Deduplication Savings ratio: 1.00:1
                 Space Saved by Volume Compression: 0B
                  Volume Compression Savings ratio: 1.00:1
      Space Saved by Inline Zero Pattern Detection: 0B
                    Volume Data Reduction SE Ratio: 1.00:1
               Logical Space Used by the Aggregate: 13.34GB
              Physical Space Used by the Aggregate: 1.83GB
           Space Saved by Aggregate Data Reduction: 11.51GB
                 Aggregate Data Reduction SE Ratio: 7.29:1
              Logical Size Used by Snapshot Copies: 0B
             Physical Size Used by Snapshot Copies: 0B
              Snapshot Volume Data Reduction Ratio: 1.00:1
            Logical Size Used by FlexClone Volumes: 0B
          Physical Sized Used by FlexClone Volumes: 0B
             FlexClone Volume Data Reduction Ratio: 1.00:1
Snapshot And FlexClone Volume Data Reduction SE Ratio: 1.00:1
                         Number of Volumes Offline: 0
                    Number of SIS Disabled Volumes: 1
         Number of SIS Change Log Disabled Volumes: 0

::*> aggr show -fields availsize, usedsize, size, physical-used, physical-used-percent, data-compaction-space-saved, data-compaction-space-saved-percent, data-compacted-count, composite-capacity-tier-used, sis-space-saved, sis-space-saved-percent, sis-shared-count, inactive-data-reporting-start-timestamp
aggregate availsize size    usedsize physical-used physical-used-percent data-compaction-space-saved data-compaction-space-saved-percent data-compacted-count composite-capacity-tier-used sis-space-saved sis-space-saved-percent sis-shared-count inactive-data-reporting-start-timestamp
--------- --------- ------- -------- ------------- --------------------- --------------------------- ----------------------------------- -------------------- ---------------------------- --------------- ----------------------- ---------------- ---------------------------------------
aggr1     859.8GB   861.8GB 1.99GB   1.56GB        0%                    11.51GB                     85%                                 526.7MB              2.25GB                       11.51GB         85%                     526.7MB          -

::*> aggr show-space

      Aggregate : aggr1
      Performance Tier
      Feature                                          Used      Used%
      --------------------------------           ----------     ------
      Volume Footprints                              1.73GB         0%
      Aggregate Metadata                            11.77GB         1%
      Snapshot Reserve                              45.36GB         5%
      Total Used                                    47.34GB         5%

      Total Physical Used                            1.56GB         0%


      Total Provisioned Space                          65GB         7%


      Aggregate : aggr1
      Object Store: FSxFabricpoolObjectStore
      Feature                                          Used      Used%
      --------------------------------           ----------     ------
      Logical Used                                  32.28GB          -
      Logical Referenced Capacity                   32.12GB          -
      Logical Unreferenced Capacity                 159.8MB          -
      Space Saved by Storage Efficiency             30.03GB          -

      Total Physical Used                            2.25GB          -



2 entries were displayed.

Total Physical Used under Object Store: FSxFabricpoolObjectStore in aggr show-space is unchanged at 2.25GB, but Total Physical Used for the Performance Tier has dropped sharply from 9.19GB to 1.56GB.

I believe the data blocks that were tiered have now been freed from the SSD.

Taking backups

Let's take a backup in each region.
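
One way to start them is with the AWS CLI; here is a minimal sketch using the volume IDs that appear in the describe-backups output below. How the backups were actually triggered in this test is not shown, so treat this only as one option.

# Virginia North (volume with Tiering Policy all)
$ aws fsx create-backup \
    --volume-id fsvol-04e5537afcf487f72 \
    --tags Key=Name,Value=non-97-backup-tiering-policy-all

# Osaka (volume with Tiering Policy none)
$ aws fsx create-backup \
    --region ap-northeast-3 \
    --volume-id fsvol-00d2c13cfc7e7e490 \
    --tags Key=Name,Value=non-97-backup-tiering-policy-none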

Both backups completed in a few minutes.

Backup of the Virginia North FSxN

Backup of the volume with Tiering Policy All

$ aws fsx describe-backups 
{
    "Backups": [
        {
            "BackupId": "backup-0064730931f4144f9",
            "Lifecycle": "AVAILABLE",
            "Type": "USER_INITIATED",
            "ProgressPercent": 100,
            "CreationTime": "2024-01-05T00:08:35.659000+00:00",
            "KmsKeyId": "arn:aws:kms:us-east-1:<AWSアカウントID>:key/73e96c0a-aeb6-4813-aae6-1882c899d445",
            "ResourceARN": "arn:aws:fsx:us-east-1:<AWSアカウントID>:backup/backup-0064730931f4144f9",
            "Tags": [
                {
                    "Key": "Name",
                    "Value": "non-97-backup-tiering-policy-all"
                }
            ],
            "OwnerId": "<AWSアカウントID>",
            "ResourceType": "VOLUME",
            "Volume": {
                "FileSystemId": "fs-0ff65f4e6a2c26228",
                "Lifecycle": "ACTIVE",
                "Name": "vol1",
                "OntapConfiguration": {
                    "JunctionPath": "/vol1",
                    "SizeInMegabytes": 65536,
                    "StorageEfficiencyEnabled": false,
                    "StorageVirtualMachineId": "svm-0a3f4372347ab5e29",
                    "TieringPolicy": {
                        "Name": "ALL"
                    },
                    "CopyTagsToBackups": false,
                    "VolumeStyle": "FLEXVOL",
                    "SizeInBytes": 68719476736
                },
                "ResourceARN": "arn:aws:fsx:us-east-1:<AWSアカウントID>:volume/fsvol-04e5537afcf487f72",
                "VolumeId": "fsvol-04e5537afcf487f72",
                "VolumeType": "ONTAP"
            }
        }
    ]
}

Backup of the Osaka FSxN

Backup of the volume with Tiering Policy None

$ aws fsx describe-backups --region ap-northeast-3
{
    "Backups": [
        {
            "BackupId": "backup-0fde44fd2a2ff4a36",
            "Lifecycle": "AVAILABLE",
            "Type": "USER_INITIATED",
            "ProgressPercent": 100,
            "CreationTime": "2024-01-05T00:08:52.303000+00:00",
            "KmsKeyId": "arn:aws:kms:ap-northeast-3:<AWSアカウントID>:key/cc5bc947-b9fa-4614-8f7d-8ab0b5778679",
            "ResourceARN": "arn:aws:fsx:ap-northeast-3:<AWSアカウントID>:backup/backup-0fde44fd2a2ff4a36",
            "Tags": [
                {
                    "Key": "Name",
                    "Value": "non-97-backup-tiering-policy-none"
                }
            ],
            "OwnerId": "<AWSアカウントID>",
            "ResourceType": "VOLUME",
            "Volume": {
                "FileSystemId": "fs-0f1302327a12b6488",
                "Lifecycle": "ACTIVE",
                "Name": "vol1",
                "OntapConfiguration": {
                    "JunctionPath": "/vol1",
                    "SizeInMegabytes": 65536,
                    "StorageEfficiencyEnabled": false,
                    "StorageVirtualMachineId": "svm-0a7e0e36f5d9aebb9",
                    "TieringPolicy": {
                        "Name": "NONE"
                    },
                    "CopyTagsToBackups": false,
                    "VolumeStyle": "FLEXVOL",
                    "SizeInBytes": 68719476736
                },
                "ResourceARN": "arn:aws:fsx:ap-northeast-3:<AWSアカウントID>:volume/fsvol-00d2c13cfc7e7e490",
                "VolumeId": "fsvol-00d2c13cfc7e7e490",
                "VolumeType": "ONTAP"
            }
        }
    ]
}

There is no information that shows when a backup completed.
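
If you need a rough completion time, one option is to poll the backup's Lifecycle with the AWS CLI and record when it becomes AVAILABLE; a minimal sketch using the backup ID above:

# Wait until the backup becomes AVAILABLE, then print the time
$ until [ "$(aws fsx describe-backups \
      --backup-ids backup-0064730931f4144f9 \
      --query 'Backups[0].Lifecycle' \
      --output text)" = "AVAILABLE" ]; do
    sleep 30
  done; date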

I checked the administrative activity audit log after the backups completed, but found no log entries indicating that a backup had finished.

::*> security audit log show -fields timestamp, node, application, vserver, username, input, state, message -state Error|Success -timestamp >"Thu Jan 05 00:00:00 2024"
timestamp                  node                      application vserver                username input                                                             state   message
-------------------------- ------------------------- ----------- ---------------------- -------- ----------------------------------------------------------------- ------- -------
"Fri Jan 05 00:07:03 2024" FsxId0ff65f4e6a2c26228-01 http        FsxId0ff65f4e6a2c26228 admin    GET /api/private/cli/storage/failover?fields=node,possible,reason Success -
"Fri Jan 05 00:07:03 2024" FsxId0ff65f4e6a2c26228-01 http        FsxId0ff65f4e6a2c26228 admin    GET /api/private/cli/storage/aggregate?fields=raidstatus%2Ccomposite%2Croot%2Cuuid
                                                                                                                                                                   Success -
"Fri Jan 05 00:08:52 2024" FsxId0ff65f4e6a2c26228-01 http        FsxId0ff65f4e6a2c26228 fsx-control-plane
                                                                                                 POST /api/cluster/licensing/access_tokens/ : {"client_secret":***,"grant_type":"client_credentials","client_id":"clientId"}
                                                                                                                                                                   Success -
"Fri Jan 05 00:08:52 2024" FsxId0ff65f4e6a2c26228-01 http        FsxId0ff65f4e6a2c26228 fsx-control-plane
                                                                                                 POST /api/snapmirror/relationships/?return_records=true : {"destination":{"path":"amazon-fsx-ontap-backup-us-east-1-3910b023-bb443720:/objstore/0c000000-020e-fd62-0000-000000758d2f","uuid":"0c000000-020e-fd62-0000-000000758d2f"},"policy":{"name":"FSxPolicy"},"source":{"path":"svm:vol1"}}
                                                                                                                                                                   Success -
"Fri Jan 05 00:08:52 2024" FsxId0ff65f4e6a2c26228-01 http        FsxId0ff65f4e6a2c26228 fsx-control-plane
                                                                                                 POST /api/snapmirror/relationships : uuid=9bd08699-ab5e-11ee-ae76-134ae21cb0c8 isv_name="AWS FSx"
                                                                                                                                                                   Success -
"Fri Jan 05 00:08:52 2024" FsxId0ff65f4e6a2c26228-01 http        FsxId0ff65f4e6a2c26228 fsx-control-plane
                                                                                                 POST /api/storage/volumes/4f234817-ab56-11ee-ae76-134ae21cb0c8/snapshots?return_records=true : {"name":"backup-0064730931f4144f9"}
                                                                                                                                                                   Success -
"Fri Jan 05 00:08:53 2024" FsxId0ff65f4e6a2c26228-01 http        FsxId0ff65f4e6a2c26228 fsx-control-plane
                                                                                                 POST /api/cluster/licensing/access_tokens/ : {"client_secret":***,"grant_type":"client_credentials","client_id":"clientId"}
                                                                                                                                                                   Success -
"Fri Jan 05 00:08:53 2024" FsxId0ff65f4e6a2c26228-01 http        FsxId0ff65f4e6a2c26228 fsx-control-plane
                                                                                                 POST /api/snapmirror/relationships/9bd08699-ab5e-11ee-ae76-134ae21cb0c8/transfers : isv_name="AWS FSx"
                                                                                                                                                                   Success -
"Fri Jan 05 00:08:53 2024" FsxId0ff65f4e6a2c26228-01 http        FsxId0ff65f4e6a2c26228 fsx-control-plane
                                                                                                 POST /api/snapmirror/relationships/9bd08699-ab5e-11ee-ae76-134ae21cb0c8/transfers?return_records=true : {"source_snapshot":"backup-0064730931f4144f9"}
                                                                                                                                                                   Success -
"Fri Jan 05 00:17:03 2024" FsxId0ff65f4e6a2c26228-01 http        FsxId0ff65f4e6a2c26228 admin    GET /api/private/cli/storage/failover?fields=node,possible,reason Success -
"Fri Jan 05 00:17:03 2024" FsxId0ff65f4e6a2c26228-01 http        FsxId0ff65f4e6a2c26228 admin    GET /api/private/cli/storage/aggregate?fields=raidstatus%2Ccomposite%2Croot%2Cuuid
                                                                                                                                                                   Success -
11 entries were displayed.
::*> security audit log show -fields timestamp, node, application, vserver, username, input, state, message -state Error|Success -timestamp >"Thu Jan 05 00:00:00 2024"
timestamp                  node                      application vserver                username input                                                             state   message
-------------------------- ------------------------- ----------- ---------------------- -------- ----------------------------------------------------------------- ------- -------
"Fri Jan 05 00:05:25 2024" FsxId0f1302327a12b6488-01 http        FsxId0f1302327a12b6488 admin    GET /api/private/cli/storage/failover?fields=node,possible,reason Success -
"Fri Jan 05 00:05:25 2024" FsxId0f1302327a12b6488-01 http        FsxId0f1302327a12b6488 admin    GET /api/private/cli/storage/aggregate?fields=raidstatus%2Ccomposite%2Croot%2Cuuid
                                                                                                                                                                   Success -
"Fri Jan 05 00:08:59 2024" FsxId0f1302327a12b6488-01 http        FsxId0f1302327a12b6488 fsx-control-plane
                                                                                                 POST /api/cluster/licensing/access_tokens/ : {"client_secret":***,"grant_type":"client_credentials","client_id":"clientId"}
                                                                                                                                                                   Success -
"Fri Jan 05 00:08:59 2024" FsxId0f1302327a12b6488-01 http        FsxId0f1302327a12b6488 fsx-control-plane
                                                                                                 POST /api/snapmirror/relationships/?return_records=true : {"destination":{"path":"amazon-fsx-ontap-backup-ap-northeast-3-d78270f6-b557c480:/objstore/0cc00000-0059-77e6-0000-000000083bc6","uuid":"0cc00000-0059-77e6-0000-000000083bc6"},"policy":{"name":"FSxPolicy"},"source":{"path":"svm:vol1"}}
                                                                                                                                                                   Success -
"Fri Jan 05 00:08:59 2024" FsxId0f1302327a12b6488-01 http        FsxId0f1302327a12b6488 fsx-control-plane
                                                                                                 POST /api/snapmirror/relationships : uuid=9fffb78f-ab5e-11ee-b1b8-195a72820387 isv_name="AWS FSx"
                                                                                                                                                                   Success -
"Fri Jan 05 00:09:00 2024" FsxId0f1302327a12b6488-01 http        FsxId0f1302327a12b6488 fsx-control-plane
                                                                                                 POST /api/storage/volumes/ca2b4c3c-ab55-11ee-b1b8-195a72820387/snapshots?return_records=true : {"name":"backup-0fde44fd2a2ff4a36"}
                                                                                                                                                                   Success -
"Fri Jan 05 00:09:00 2024" FsxId0f1302327a12b6488-01 http        FsxId0f1302327a12b6488 fsx-control-plane
                                                                                                 POST /api/cluster/licensing/access_tokens/ : {"client_secret":***,"grant_type":"client_credentials","client_id":"clientId"}
                                                                                                                                                                   Success -
"Fri Jan 05 00:09:00 2024" FsxId0f1302327a12b6488-01 http        FsxId0f1302327a12b6488 fsx-control-plane
                                                                                                 POST /api/snapmirror/relationships/9fffb78f-ab5e-11ee-b1b8-195a72820387/transfers : isv_name="AWS FSx"
                                                                                                                                                                   Success -
"Fri Jan 05 00:09:00 2024" FsxId0f1302327a12b6488-01 http        FsxId0f1302327a12b6488 fsx-control-plane
                                                                                                 POST /api/snapmirror/relationships/9fffb78f-ab5e-11ee-b1b8-195a72820387/transfers?return_records=true : {"source_snapshot":"backup-0fde44fd2a2ff4a36"}
                                                                                                                                                                   Success -
"Fri Jan 05 00:15:24 2024" FsxId0f1302327a12b6488-01 http        FsxId0f1302327a12b6488 admin    GET /api/private/cli/storage/failover?fields=node,possible,reason Success -
"Fri Jan 05 00:15:25 2024" FsxId0f1302327a12b6488-01 http        FsxId0f1302327a12b6488 admin    GET /api/private/cli/storage/aggregate?fields=raidstatus%2Ccomposite%2Croot%2Cuuid
                                                                                                                                                                   Success -
11 entries were displayed.

Subjectively, the backup of the N. Virginia FSxN with Tiering Policy All felt like it took longer.
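
To put a number on that rather than relying on an impression, one approach would be to poll DescribeBackups right after requesting each backup and record when its lifecycle reaches AVAILABLE. The following is a minimal sketch, assuming boto3 and reusing the backup ID that appears in the audit log above; it only measures the duration correctly if polling starts as soon as the backup is requested.

import time

import boto3

fsx = boto3.client("fsx", region_name="us-east-1")

# Poll the backup until it becomes AVAILABLE and report the elapsed time.
# backup-0064730931f4144f9 is the backup ID recorded in the audit log above;
# substitute the ID returned by your own CreateBackup call.
backup_id = "backup-0064730931f4144f9"
start = time.time()
while True:
    backup = fsx.describe_backups(BackupIds=[backup_id])["Backups"][0]
    if backup["Lifecycle"] == "AVAILABLE":
        break
    time.sleep(30)

print(f"{backup_id} became AVAILABLE after {time.time() - start:.0f} seconds")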

Checking the aggregate consumption of volumes restored from backup

Checking the volume restored from the backup taken on the N. Virginia FSxN

It takes more than a day for the backup storage size to be reflected in Cost Explorer.
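
Once the numbers do appear, they can also be pulled programmatically. The following is a minimal sketch, assuming boto3; the USAGE_TYPE strings ("USE1-BackupUsage" and "APN3-BackupUsage") are assumptions on my part, so confirm the exact values shown in your own Cost Explorer before filtering on them.

import boto3

# Cost Explorer is a global API; the us-east-1 endpoint serves all regions.
ce = boto3.client("ce", region_name="us-east-1")

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-01-05", "End": "2024-01-09"},
    Granularity="DAILY",
    Metrics=["UsageQuantity", "UnblendedCost"],
    Filter={
        "And": [
            {"Dimensions": {"Key": "SERVICE", "Values": ["Amazon FSx"]}},
            # Assumed usage types for FSx backup storage in us-east-1 / ap-northeast-3
            {"Dimensions": {"Key": "USAGE_TYPE",
                            "Values": ["USE1-BackupUsage", "APN3-BackupUsage"]}},
        ]
    },
    GroupBy=[{"Type": "DIMENSION", "Key": "USAGE_TYPE"}],
)

for period in response["ResultsByTime"]:
    for group in period["Groups"]:
        print(period["TimePeriod"]["Start"],
              group["Keys"][0],
              group["Metrics"]["UsageQuantity"]["Amount"])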

Rather than let the waiting time go to waste, let's check the aggregate consumption of the volumes restored from backup in the meantime.

Does a volume restored from backup retain the data reduction gained through Inactive data compression?

First, restore a volume from the backup in N. Virginia.
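
For reference, this kind of restore can be started with the CreateVolumeFromBackup API. Below is a minimal sketch, assuming boto3; the SVM ID is a placeholder, while the backup ID matches the snapshot name recorded in the audit log above.

import boto3

fsx = boto3.client("fsx", region_name="us-east-1")

# Restore the backup into a new volume named vol1_restored.
response = fsx.create_volume_from_backup(
    BackupId="backup-0064730931f4144f9",      # snapshot name seen in the audit log
    Name="vol1_restored",
    OntapConfiguration={
        "StorageVirtualMachineId": "svm-0123456789abcdef0",  # placeholder
        "JunctionPath": "/vol1_restored",
        "SizeInMegabytes": 65536,             # 64GiB, same as the source volume
        "TieringPolicy": {"Name": "NONE"},    # the restored volume starts out as NONE
    },
)

print(response["Volume"]["VolumeId"], response["Volume"]["Lifecycle"])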

Restoring from the Tiering Policy All backup

The management activity audit log at the time of the restore is as follows.

::*> security audit log show -fields timestamp, node, application, vserver, username, input, state, message -state Error|Success -timestamp >"Thu Jan 05 01:00:00 2024"
timestamp                  node                      application vserver                username input                                                             state   message
-------------------------- ------------------------- ----------- ---------------------- -------- ----------------------------------------------------------------- ------- -------
"Fri Jan 05 01:02:03 2024" FsxId0ff65f4e6a2c26228-01 http        FsxId0ff65f4e6a2c26228 admin    GET /api/private/cli/storage/failover?fields=node,possible,reason Success -
"Fri Jan 05 01:02:04 2024" FsxId0ff65f4e6a2c26228-01 http        FsxId0ff65f4e6a2c26228 admin    GET /api/private/cli/storage/aggregate?fields=raidstatus%2Ccomposite%2Croot%2Cuuid
                                                                                                                                                                   Success -
"Fri Jan 05 01:08:10 2024" FsxId0ff65f4e6a2c26228-01 http        FsxId0ff65f4e6a2c26228 fsx-control-plane
                                                                                                 POST /api/storage/volumes/?return_records=true : {"comment":"FSx.tmp.fsvol-05858fe57324cfd6e.49ce6842-0961-4770-b2a1-b424bdbe5ed5","language":"c.utf_8","name":"vol1_restored","size":68719476736,"tiering":{"policy":"NONE"},"type":"dp","aggregates":[{"name":"aggr1","uuid":"7b7e3000-ab55-11ee-ae76-134ae21cb0c8"}],"svm":{"name":"svm","uuid":"1e31cecd-ab56-11ee-ae76-134ae21cb0c8"}}
                                                                                                                                                                   Success -
"Fri Jan 05 01:08:15 2024" FsxId0ff65f4e6a2c26228-01 http        FsxId0ff65f4e6a2c26228 fsx-control-plane
                                                                                                 GET /api/private/cli/vserver/cifs/check/?fields=status%2Cstatus_details
                                                                                                                                                                   Success -
"Fri Jan 05 01:08:21 2024" FsxId0ff65f4e6a2c26228-01 http        FsxId0ff65f4e6a2c26228 fsx-control-plane
                                                                                                 PATCH /api/storage/volumes/e4c84520-ab66-11ee-ae76-134ae21cb0c8 : {"comment":""}
                                                                                                                                                                   Success -
"Fri Jan 05 01:08:21 2024" FsxId0ff65f4e6a2c26228-01 ssh         FsxId0ff65f4e6a2c26228 fsx-control-plane
                                                                                                 set -privilege diagnostic                                         Success -
"Fri Jan 05 01:08:21 2024" FsxId0ff65f4e6a2c26228-01 ssh         FsxId0ff65f4e6a2c26228 fsx-control-plane
                                                                                                 system node run -node FsxId0ff65f4e6a2c26228-01 -command wafl obj_cache flush
                                                                                                                                                                   Success -
"Fri Jan 05 01:08:21 2024" FsxId0ff65f4e6a2c26228-01 ssh         FsxId0ff65f4e6a2c26228 fsx-control-plane
                                                                                                 Logging out                                                       Success -
"Fri Jan 05 01:08:21 2024" FsxId0ff65f4e6a2c26228-01 http        FsxId0ff65f4e6a2c26228 fsx-control-plane
                                                                                                 POST /api/private/cli : {"input":"set -privilege diagnostic ; system node run -node FsxId0ff65f4e6a2c26228-01 -command wafl obj_cache flush"}
                                                                                                                                                                   Success -
"Fri Jan 05 01:08:21 2024" FsxId0ff65f4e6a2c26228-01 ssh         FsxId0ff65f4e6a2c26228 fsx-control-plane
                                                                                                 set -privilege diagnostic                                         Success -
"Fri Jan 05 01:08:21 2024" FsxId0ff65f4e6a2c26228-01 ssh         FsxId0ff65f4e6a2c26228 fsx-control-plane
                                                                                                 system node run -node FsxId0ff65f4e6a2c26228-02 -command wafl obj_cache flush
                                                                                                                                                                   Success -
"Fri Jan 05 01:08:21 2024" FsxId0ff65f4e6a2c26228-01 ssh         FsxId0ff65f4e6a2c26228 fsx-control-plane
                                                                                                 Logging out                                                       Success -
"Fri Jan 05 01:08:21 2024" FsxId0ff65f4e6a2c26228-01 http        FsxId0ff65f4e6a2c26228 fsx-control-plane
                                                                                                 POST /api/private/cli : {"input":"set -privilege diagnostic ; system node run -node FsxId0ff65f4e6a2c26228-02 -command wafl obj_cache flush"}
                                                                                                                                                                   Success -
"Fri Jan 05 01:08:22 2024" FsxId0ff65f4e6a2c26228-01 http        FsxId0ff65f4e6a2c26228 fsx-control-plane
                                                                                                 POST /api/cluster/licensing/access_tokens/ : {"client_secret":***,"grant_type":"client_credentials","client_id":"clientId"}
                                                                                                                                                                   Success -
"Fri Jan 05 01:08:22 2024" FsxId0ff65f4e6a2c26228-01 http        FsxId0ff65f4e6a2c26228 fsx-control-plane
                                                                                                 POST /api/snapmirror/relationships/?return_records=true : {"destination":{"path":"svm:vol1_restored"},"restore":true,"source":{"path":"amazon-fsx-ontap-backup-us-east-1-3910b023-bb443720:/objstore/0c000000-020e-fd62-0000-000000758d2f_rst","uuid":"0c000000-020e-fd62-0000-000000758d2f"}}
                                                                                                                                                                   Success -
"Fri Jan 05 01:08:22 2024" FsxId0ff65f4e6a2c26228-01 http        FsxId0ff65f4e6a2c26228 fsx-control-plane
                                                                                                 POST /api/snapmirror/relationships : uuid=eb8953ac-ab66-11ee-ae76-134ae21cb0c8 isv_name="AWS FSx"
                                                                                                                                                                   Success -
"Fri Jan 05 01:08:22 2024" FsxId0ff65f4e6a2c26228-01 http        FsxId0ff65f4e6a2c26228 fsx-control-plane
                                                                                                 POST /api/cluster/licensing/access_tokens/ : {"client_secret":***,"grant_type":"client_credentials","client_id":"clientId"}
                                                                                                                                                                   Success -
"Fri Jan 05 01:08:22 2024" FsxId0ff65f4e6a2c26228-01 http        FsxId0ff65f4e6a2c26228 fsx-control-plane
                                                                                                 POST /api/snapmirror/relationships/eb8953ac-ab66-11ee-ae76-134ae21cb0c8/transfers : isv_name="AWS FSx"
                                                                                                                                                                   Success -
"Fri Jan 05 01:08:22 2024" FsxId0ff65f4e6a2c26228-01 http        FsxId0ff65f4e6a2c26228 fsx-control-plane
                                                                                                 POST /api/snapmirror/relationships/eb8953ac-ab66-11ee-ae76-134ae21cb0c8/transfers?return_records=true : {"source_snapshot":"backup-0064730931f4144f9"}
                                                                                                                                                                   Success -
"Fri Jan 05 01:10:52 2024" FsxId0ff65f4e6a2c26228-01 ssh         FsxId0ff65f4e6a2c26228 fsx-control-plane
                                                                                                 set -privilege diagnostic                                         Success -
"Fri Jan 05 01:10:52 2024" FsxId0ff65f4e6a2c26228-01 ssh         FsxId0ff65f4e6a2c26228 fsx-control-plane
                                                                                                 volume efficiency inactive-data-compression stop -volume vol1_restored -vserver svm
                                                                                                                                                                   Success -
"Fri Jan 05 01:10:52 2024" FsxId0ff65f4e6a2c26228-01 ssh         FsxId0ff65f4e6a2c26228 fsx-control-plane
                                                                                                 Logging out                                                       Success -
"Fri Jan 05 01:10:52 2024" FsxId0ff65f4e6a2c26228-01 http        FsxId0ff65f4e6a2c26228 fsx-control-plane
                                                                                                 POST /api/private/cli : {"input":"set -privilege diagnostic ; volume efficiency inactive-data-compression stop -volume vol1_restored -vserver svm"}
                                                                                                                                                                   Success -
"Fri Jan 05 01:10:52 2024" FsxId0ff65f4e6a2c26228-01 ssh         FsxId0ff65f4e6a2c26228 fsx-control-plane
                                                                                                 set -privilege diagnostic                                         Success -
"Fri Jan 05 01:10:52 2024" FsxId0ff65f4e6a2c26228-01 ssh         FsxId0ff65f4e6a2c26228 fsx-control-plane
                                                                                                 volume efficiency inactive-data-compression modify -volume vol1_restored -vserver svm -is-enabled false
                                                                                                                                                                   Success -
"Fri Jan 05 01:10:52 2024" FsxId0ff65f4e6a2c26228-01 ssh         FsxId0ff65f4e6a2c26228 fsx-control-plane
                                                                                                 Logging out                                                       Success -
"Fri Jan 05 01:10:52 2024" FsxId0ff65f4e6a2c26228-01 http        FsxId0ff65f4e6a2c26228 fsx-control-plane
                                                                                                 POST /api/private/cli : {"input":"set -privilege diagnostic ; volume efficiency inactive-data-compression modify -volume vol1_restored -vserver svm -is-enabled false"}
                                                                                                                                                                   Success -
"Fri Jan 05 01:11:13 2024" FsxId0ff65f4e6a2c26228-01 http        FsxId0ff65f4e6a2c26228 fsx-control-plane
                                                                                                 PATCH /api/storage/volumes/e4c84520-ab66-11ee-ae76-134ae21cb0c8 : {"tiering":{"policy":"NONE"},"nas":{"path":"/vol1_restored","security_style":"unix"},"efficiency":{"compression":"none","compaction":"none","dedupe":"none","cross_volume_dedupe":"none"},"snapshot_policy":{"name":"none"}}
                                                                                                                                                                   Success -
"Fri Jan 05 01:12:03 2024" FsxId0ff65f4e6a2c26228-01 http        FsxId0ff65f4e6a2c26228 admin    GET /api/private/cli/storage/failover?fields=node,possible,reason Success -
"Fri Jan 05 01:12:04 2024" FsxId0ff65f4e6a2c26228-01 http        FsxId0ff65f4e6a2c26228 admin    GET /api/private/cli/storage/aggregate?fields=raidstatus%2Ccomposite%2Croot%2Cuuid
                                                                                                                                                                   Success -
"Fri Jan 05 01:22:03 2024" FsxId0ff65f4e6a2c26228-01 http        FsxId0ff65f4e6a2c26228 admin    GET /api/private/cli/storage/failover?fields=node,possible,reason Success -
"Fri Jan 05 01:22:03 2024" FsxId0ff65f4e6a2c26228-01 http        FsxId0ff65f4e6a2c26228 admin    GET /api/private/cli/storage/aggregate?fields=raidstatus%2Ccomposite%2Croot%2Cuuid
                                                                                                                                                                   Success -
32 entries were displayed.

The Storage Efficiency, volume, and aggregate information of the N. Virginia FSxN after the restore is as follows.

::*> volume efficiency show
Vserver    Volume           State     Status      Progress           Policy
---------- ---------------- --------- ----------- ------------------ ----------
svm        vol1             Enabled   Idle        Idle for 02:08:53  auto
svm        vol1_restored    Disabled  Idle        Idle for 00:07:23  -
2 entries were displayed.

::*> volume show -volume vol1* -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared,logical-used, logical-used-percent,logical-used-by-afs, logical-available
vserver volume size available filesystem-size total   used    percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- ------ ---- --------- --------------- ------- ------- ------------ ------------------ -------------------------- ------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- -------------------------------------------
svm     vol1   64GB 28.66GB   64GB            60.80GB 32.14GB 52%          0B                 0%                         0B                  32.14GB      53%                  -                 32.14GB             -                                   -
svm     vol1_restored
               64GB 31.65GB   64GB            64GB    32.35GB 50%          98.58MB            0%                         4KB                 32.45GB      51%                  -                 32.45GB             0B                                  0%
2 entries were displayed.

::*> volume show-footprint -volume vol1*


      Vserver : svm
      Volume  : vol1

      Feature                                         Used       Used%
      --------------------------------             ----------    -----
      Volume Data Footprint                           32.14GB       4%
             Footprint in Performance Tier            421.1MB       1%
             Footprint in FSxFabricpoolObjectStore
                                                         32GB      99%
      Volume Guarantee                                     0B       0%
      Flexible Volume Metadata                        322.4MB       0%
      Delayed Frees                                   276.8MB       0%
      File Operation Metadata                             4KB       0%

      Total Footprint                                 32.73GB       4%

      Footprint Data Reduction                        403.3MB       0%
           Auto Adaptive Compression                  403.3MB       0%
      Footprint Data Reduction in capacity tier       29.76GB        -
      Effective Total Footprint                        2.57GB       0%


      Vserver : svm
      Volume  : vol1_restored

      Feature                                         Used       Used%
      --------------------------------             ----------    -----
      Volume Data Footprint                           32.35GB       4%
             Footprint in Performance Tier            32.37GB     100%
             Footprint in FSxFabricpoolObjectStore         0B       0%
      Volume Guarantee                                     0B       0%
      Flexible Volume Metadata                        322.4MB       0%
      Deduplication Metadata                          500.7MB       0%
           Temporary Deduplication                    500.7MB       0%
      Delayed Frees                                   19.34MB       0%
      File Operation Metadata                             4KB       0%

      Total Footprint                                 33.18GB       4%

      Effective Total Footprint                       33.18GB       4%
2 entries were displayed.

::*> aggr show-efficiency -instance

                             Name of the Aggregate: aggr1
                      Node where Aggregate Resides: FsxId0ff65f4e6a2c26228-01
Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 128.9GB
                               Total Physical Used: 27.07GB
                    Total Storage Efficiency Ratio: 4.76:1
Total Data Reduction Logical Used Without Snapshots: 64.35GB
Total Data Reduction Physical Used Without Snapshots: 27.07GB
Total Data Reduction Efficiency Ratio Without Snapshots: 2.38:1
Total Data Reduction Logical Used without snapshots and flexclones: 64.35GB
Total Data Reduction Physical Used without snapshots and flexclones: 27.07GB
Total Data Reduction Efficiency Ratio without snapshots and flexclones: 2.38:1
Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 65.72GB
Total Physical Used in FabricPool Performance Tier: 25.28GB
Total FabricPool Performance Tier Storage Efficiency Ratio: 2.60:1
Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 32.86GB
Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 25.27GB
Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 1.30:1
                Logical Space Used for All Volumes: 64.35GB
               Physical Space Used for All Volumes: 64.25GB
               Space Saved by Volume Deduplication: 98.58MB
Space Saved by Volume Deduplication and pattern detection: 98.58MB
                Volume Deduplication Savings ratio: 1.00:1
                 Space Saved by Volume Compression: 0B
                  Volume Compression Savings ratio: 1.00:1
      Space Saved by Inline Zero Pattern Detection: 0B
                    Volume Data Reduction SE Ratio: 1.00:1
               Logical Space Used by the Aggregate: 37.31GB
              Physical Space Used by the Aggregate: 27.07GB
           Space Saved by Aggregate Data Reduction: 10.23GB
                 Aggregate Data Reduction SE Ratio: 1.38:1
              Logical Size Used by Snapshot Copies: 64.59GB
             Physical Size Used by Snapshot Copies: 764KB
              Snapshot Volume Data Reduction Ratio: 88649.54:1
            Logical Size Used by FlexClone Volumes: 0B
          Physical Sized Used by FlexClone Volumes: 0B
             FlexClone Volume Data Reduction Ratio: 1.00:1
Snapshot And FlexClone Volume Data Reduction SE Ratio: 88649.54:1
                         Number of Volumes Offline: 0
                    Number of SIS Disabled Volumes: 1
         Number of SIS Change Log Disabled Volumes: 1

::*> aggr show -fields availsize, usedsize, size, physical-used, physical-used-percent, data-compaction-space-saved, data-compaction-space-saved-percent, data-compacted-count, composite-capacity-tier-used, sis-space-saved, sis-space-saved-percent, sis-shared-count, inactive-data-reporting-start-timestamp
aggregate availsize size    usedsize physical-used physical-used-percent data-compaction-space-saved data-compaction-space-saved-percent data-compacted-count composite-capacity-tier-used sis-space-saved sis-space-saved-percent sis-shared-count inactive-data-reporting-start-timestamp
--------- --------- ------- -------- ------------- --------------------- --------------------------- ----------------------------------- -------------------- ---------------------------- --------------- ----------------------- ---------------- ---------------------------------------
aggr1     826.3GB   861.8GB 35.46GB  35.04GB       4%                    10.23GB                     22%                                 468.2MB              2.25GB                       10.23GB        22%                     468.2MB          -

::*> aggr show-space

      Aggregate : aggr1
      Performance Tier
      Feature                                          Used      Used%
      --------------------------------           ----------     ------
      Volume Footprints                             34.91GB         4%
      Aggregate Metadata                            10.78GB         1%
      Snapshot Reserve                              45.36GB         5%
      Total Used                                    80.81GB         9%

      Total Physical Used                           35.04GB         4%


      Total Provisioned Space                         129GB        14%


      Aggregate : aggr1
      Object Store: FSxFabricpoolObjectStore
      Feature                                          Used      Used%
      --------------------------------           ----------     ------
      Logical Used                                  32.28GB          -
      Logical Referenced Capacity                   32.12GB          -
      Logical Unreferenced Capacity                 159.8MB          -
      Space Saved by Storage Efficiency             30.03GB          -

      Total Physical Used                            2.25GB          -



2 entries were displayed.

The Total Physical Used of the Performance Tier in aggr show-space has gone from 1.56GB to 35.04GB, which shows that the aggregate-layer data reduction has been lost.

The Storage Efficiency, volume, and aggregate information of the N. Virginia FSxN after leaving it alone for seven hours is as follows.

::*> volume efficiency show
Vserver    Volume           State     Status      Progress           Policy
---------- ---------------- --------- ----------- ------------------ ----------
svm        vol1             Enabled   Idle        Idle for 08:57:14  auto
svm        vol1_restored    Disabled  Idle        Idle for 06:55:44  -
2 entries were displayed.

::*> volume show -volume vol1* -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared,logical-used, logical-used-percent,logical-used-by-afs, logical-available
vserver volume size available filesystem-size total   used    percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- ------ ---- --------- --------------- ------- ------- ------------ ------------------ -------------------------- ------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- -------------------------------------------
svm     vol1   64GB 28.66GB   64GB            60.80GB 32.14GB 52%          0B                 0%                         0B                  32.14GB      53%                  -                 32.14GB             -                                   -
svm     vol1_restored
               64GB 31.65GB   64GB            64GB    32.35GB 50%          98.58MB            0%                         4KB                 32.45GB      51%                  -                 32.45GB             0B                                  0%
2 entries were displayed.

::*> volume show-footprint -volume vol1*


      Vserver : svm
      Volume  : vol1

      Feature                                         Used       Used%
      --------------------------------             ----------    -----
      Volume Data Footprint                           32.14GB       4%
             Footprint in Performance Tier            421.1MB       1%
             Footprint in FSxFabricpoolObjectStore
                                                         32GB      99%
      Volume Guarantee                                     0B       0%
      Flexible Volume Metadata                        322.4MB       0%
      Delayed Frees                                   276.8MB       0%
      File Operation Metadata                             4KB       0%

      Total Footprint                                 32.73GB       4%

      Footprint Data Reduction                        403.3MB       0%
           Auto Adaptive Compression                  403.3MB       0%
      Footprint Data Reduction in capacity tier       29.76GB        -
      Effective Total Footprint                        2.57GB       0%


      Vserver : svm
      Volume  : vol1_restored

      Feature                                         Used       Used%
      --------------------------------             ----------    -----
      Volume Data Footprint                           32.35GB       4%
             Footprint in Performance Tier            32.37GB     100%
             Footprint in FSxFabricpoolObjectStore         0B       0%
      Volume Guarantee                                     0B       0%
      Flexible Volume Metadata                        322.4MB       0%
      Deduplication Metadata                          500.7MB       0%
           Temporary Deduplication                    500.7MB       0%
      Delayed Frees                                   19.50MB       0%
      File Operation Metadata                             4KB       0%

      Total Footprint                                 33.18GB       4%

      Effective Total Footprint                       33.18GB       4%
2 entries were displayed.

::*> aggr show-efficiency -instance

                             Name of the Aggregate: aggr1
                      Node where Aggregate Resides: FsxId0ff65f4e6a2c26228-01
Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 128.9GB
                               Total Physical Used: 30.64GB
                    Total Storage Efficiency Ratio: 4.21:1
Total Data Reduction Logical Used Without Snapshots: 64.35GB
Total Data Reduction Physical Used Without Snapshots: 30.64GB
Total Data Reduction Efficiency Ratio Without Snapshots: 2.10:1
Total Data Reduction Logical Used without snapshots and flexclones: 64.35GB
Total Data Reduction Physical Used without snapshots and flexclones: 30.64GB
Total Data Reduction Efficiency Ratio without snapshots and flexclones: 2.10:1
Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 65.72GB
Total Physical Used in FabricPool Performance Tier: 28.94GB
Total FabricPool Performance Tier Storage Efficiency Ratio: 2.27:1
Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 32.86GB
Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 28.94GB
Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 1.14:1
                Logical Space Used for All Volumes: 64.35GB
               Physical Space Used for All Volumes: 64.25GB
               Space Saved by Volume Deduplication: 98.58MB
Space Saved by Volume Deduplication and pattern detection: 98.58MB
                Volume Deduplication Savings ratio: 1.00:1
                 Space Saved by Volume Compression: 0B
                  Volume Compression Savings ratio: 1.00:1
      Space Saved by Inline Zero Pattern Detection: 0B
                    Volume Data Reduction SE Ratio: 1.00:1
               Logical Space Used by the Aggregate: 35.83GB
              Physical Space Used by the Aggregate: 30.64GB
           Space Saved by Aggregate Data Reduction: 5.19GB
                 Aggregate Data Reduction SE Ratio: 1.17:1
              Logical Size Used by Snapshot Copies: 64.59GB
             Physical Size Used by Snapshot Copies: 1.31MB
              Snapshot Volume Data Reduction Ratio: 50545.79:1
            Logical Size Used by FlexClone Volumes: 0B
          Physical Sized Used by FlexClone Volumes: 0B
             FlexClone Volume Data Reduction Ratio: 1.00:1
Snapshot And FlexClone Volume Data Reduction SE Ratio: 50545.79:1
                         Number of Volumes Offline: 0
                    Number of SIS Disabled Volumes: 1
         Number of SIS Change Log Disabled Volumes: 1

::*> aggr show -fields availsize, usedsize, size, physical-used, physical-used-percent, data-compaction-space-saved, data-compaction-space-saved-percent, data-compacted-count, composite-capacity-tier-used, sis-space-saved, sis-space-saved-percent, sis-shared-count, inactive-data-reporting-start-timestamp
aggregate availsize size    usedsize physical-used physical-used-percent data-compaction-space-saved data-compaction-space-saved-percent data-compacted-count composite-capacity-tier-used sis-space-saved sis-space-saved-percent sis-shared-count inactive-data-reporting-start-timestamp
--------- --------- ------- -------- ------------- --------------------- --------------------------- ----------------------------------- -------------------- ---------------------------- --------------- ----------------------- ---------------- ---------------------------------------
aggr1     826.2GB   861.8GB 35.53GB  35.75GB       4%                    5.19GB                      13%                                 237.4MB              2.25GB                       5.19GB        13%                     237.4MB          -

::*> aggr show-space

      Aggregate : aggr1
      Performance Tier
      Feature                                          Used      Used%
      --------------------------------           ----------     ------
      Volume Footprints                             34.91GB         4%
      Aggregate Metadata                             5.80GB         1%
      Snapshot Reserve                              45.36GB         5%
      Total Used                                    80.88GB         9%

      Total Physical Used                           35.75GB         4%


      Total Provisioned Space                         129GB        14%


      Aggregate : aggr1
      Object Store: FSxFabricpoolObjectStore
      Feature                                          Used      Used%
      --------------------------------           ----------     ------
      Logical Used                                  32.28GB          -
      Logical Referenced Capacity                   32.12GB          -
      Logical Unreferenced Capacity                 159.8MB          -
      Space Saved by Storage Efficiency             30.03GB          -

      Total Physical Used                            2.25GB          -



2 entries were displayed.

Even after seven hours, the Total Physical Used of the Performance Tier in aggr show-space is still 35.75GB. The data has not been compressed.
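
If you want the savings back on vol1_restored, one option is to re-enable and re-run Inactive data compression yourself. The sketch below does this through the ONTAP REST API's /api/private/cli passthrough, mirroring the "set -privilege diagnostic ; <command>" pattern that appears in the audit logs above; the management endpoint DNS name and the fsxadmin password are placeholders, and -inactive-days 0 is intended to compress blocks regardless of how long they have been cold.

import requests
from requests.auth import HTTPBasicAuth

# Placeholder management endpoint and credentials for the N. Virginia file system.
MGMT_ENDPOINT = "https://management.fs-0ff65f4e6a2c26228.fsx.us-east-1.amazonaws.com"
AUTH = HTTPBasicAuth("fsxadmin", "<password>")

def run_cli(command: str) -> None:
    """Send one CLI command through the /api/private/cli passthrough, mirroring
    the 'set -privilege diagnostic ; <command>' pattern seen in the audit logs."""
    resp = requests.post(
        f"{MGMT_ENDPOINT}/api/private/cli",
        auth=AUTH,
        json={"input": f"set -privilege diagnostic ; {command}"},
        verify=False,  # the endpoint's certificate may not be in your trust store
    )
    print(resp.status_code, resp.text)

# Re-enable Inactive data compression on vol1_restored (the restore workflow
# disabled it), then start a run that ignores how long blocks have been cold.
run_cli("volume efficiency inactive-data-compression modify "
        "-volume vol1_restored -vserver svm -is-enabled true")
run_cli("volume efficiency inactive-data-compression start "
        "-volume vol1_restored -vserver svm -inactive-days 0")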

Checking the volume restored from the backup taken on the Osaka FSxN

On the Osaka FSxN, we likewise restore a volume from the backup.

Restoring from the Tiering Policy None backup

The management activity audit log at the time of the restore is as follows.

::*> security audit log show -fields timestamp, node, application, vserver, username, input, state, message -state Error|Success -timestamp >"Thu Jan 05 01:00:00 2024"
timestamp                  node                      application vserver                username          input                                                                   state   message
-------------------------- ------------------------- ----------- ---------------------- ----------------- ----------------------------------------------------------------------- ------- -------
"Fri Jan 05 01:03:47 2024" FsxId0f1302327a12b6488-01 http        FsxId0f1302327a12b6488 fsx-control-plane GET /api/private/cli/vserver/cifs/check/?fields=status%2Cstatus_details Success -
"Fri Jan 05 01:05:25 2024" FsxId0f1302327a12b6488-01 http        FsxId0f1302327a12b6488 admin             GET /api/private/cli/storage/failover?fields=node,possible,reason       Success -
"Fri Jan 05 01:05:25 2024" FsxId0f1302327a12b6488-01 http        FsxId0f1302327a12b6488 admin             GET /api/private/cli/storage/aggregate?fields=raidstatus%2Ccomposite%2Croot%2Cuuid
                                                                                                                                                                                  Success -
"Fri Jan 05 01:05:59 2024" FsxId0f1302327a12b6488-01 http        FsxId0f1302327a12b6488 fsx-control-plane POST /api/storage/volumes/?return_records=true : {"comment":"FSx.tmp.fsvol-07c86bb48288498f5.9a1544b1-d3a5-41db-9fb3-92a1ab295dfb","language":"c.utf_8","name":"vol1_restored","size":68719476736,"tiering":{"policy":"NONE"},"type":"dp","aggregates":[{"name":"aggr1","uuid":"09e00157-ab55-11ee-b1b8-195a72820387"}],"svm":{"name":"svm","uuid":"9934e544-ab55-11ee-b1b8-195a72820387"}}
                                                                                                                                                                                  Success -
"Fri Jan 05 01:06:10 2024" FsxId0f1302327a12b6488-01 http        FsxId0f1302327a12b6488 fsx-control-plane PATCH /api/storage/volumes/96a24aa2-ab66-11ee-b1b8-195a72820387 : {"comment":""}
                                                                                                                                                                                  Success -
"Fri Jan 05 01:06:10 2024" FsxId0f1302327a12b6488-01 ssh         FsxId0f1302327a12b6488 fsx-control-plane set -privilege diagnostic                                               Success -
"Fri Jan 05 01:06:10 2024" FsxId0f1302327a12b6488-01 ssh         FsxId0f1302327a12b6488 fsx-control-plane system node run -node FsxId0f1302327a12b6488-01 -command wafl obj_cache flush
                                                                                                                                                                                  Success -
"Fri Jan 05 01:06:10 2024" FsxId0f1302327a12b6488-01 ssh         FsxId0f1302327a12b6488 fsx-control-plane Logging out                                                             Success -
"Fri Jan 05 01:06:10 2024" FsxId0f1302327a12b6488-01 http        FsxId0f1302327a12b6488 fsx-control-plane POST /api/private/cli : {"input":"set -privilege diagnostic ; system node run -node FsxId0f1302327a12b6488-01 -command wafl obj_cache flush"}
                                                                                                                                                                                  Success -
"Fri Jan 05 01:06:10 2024" FsxId0f1302327a12b6488-01 ssh         FsxId0f1302327a12b6488 fsx-control-plane set -privilege diagnostic                                               Success -
"Fri Jan 05 01:06:10 2024" FsxId0f1302327a12b6488-01 ssh         FsxId0f1302327a12b6488 fsx-control-plane system node run -node FsxId0f1302327a12b6488-02 -command wafl obj_cache flush
                                                                                                                                                                                  Success -
"Fri Jan 05 01:06:10 2024" FsxId0f1302327a12b6488-01 ssh         FsxId0f1302327a12b6488 fsx-control-plane Logging out                                                             Success -
"Fri Jan 05 01:06:10 2024" FsxId0f1302327a12b6488-01 http        FsxId0f1302327a12b6488 fsx-control-plane POST /api/private/cli : {"input":"set -privilege diagnostic ; system node run -node FsxId0f1302327a12b6488-02 -command wafl obj_cache flush"}
                                                                                                                                                                                  Success -
"Fri Jan 05 01:06:10 2024" FsxId0f1302327a12b6488-01 http        FsxId0f1302327a12b6488 fsx-control-plane POST /api/cluster/licensing/access_tokens/ : {"client_secret":***,"grant_type":"client_credentials","client_id":"clientId"}
                                                                                                                                                                                  Success -
"Fri Jan 05 01:06:11 2024" FsxId0f1302327a12b6488-01 http        FsxId0f1302327a12b6488 fsx-control-plane POST /api/snapmirror/relationships/?return_records=true : {"destination":{"path":"svm:vol1_restored"},"restore":true,"source":{"path":"amazon-fsx-ontap-backup-ap-northeast-3-d78270f6-b557c480:/objstore/0cc00000-0059-77e6-0000-000000083bc6_rst","uuid":"0cc00000-0059-77e6-0000-000000083bc6"}}
                                                                                                                                                                                  Success -
"Fri Jan 05 01:06:11 2024" FsxId0f1302327a12b6488-01 http        FsxId0f1302327a12b6488 fsx-control-plane POST /api/snapmirror/relationships : uuid=9d5c1512-ab66-11ee-b1b8-195a72820387 isv_name="AWS FSx"
                                                                                                                                                                                  Success -
"Fri Jan 05 01:06:11 2024" FsxId0f1302327a12b6488-01 http        FsxId0f1302327a12b6488 fsx-control-plane POST /api/cluster/licensing/access_tokens/ : {"client_secret":***,"grant_type":"client_credentials","client_id":"clientId"}
                                                                                                                                                                                  Success -
"Fri Jan 05 01:06:11 2024" FsxId0f1302327a12b6488-01 http        FsxId0f1302327a12b6488 fsx-control-plane POST /api/snapmirror/relationships/9d5c1512-ab66-11ee-b1b8-195a72820387/transfers : isv_name="AWS FSx"
                                                                                                                                                                                  Success -
"Fri Jan 05 01:06:11 2024" FsxId0f1302327a12b6488-01 http        FsxId0f1302327a12b6488 fsx-control-plane POST /api/snapmirror/relationships/9d5c1512-ab66-11ee-b1b8-195a72820387/transfers?return_records=true : {"source_snapshot":"backup-0fde44fd2a2ff4a36"}
                                                                                                                                                                                  Success -
"Fri Jan 05 01:09:01 2024" FsxId0f1302327a12b6488-01 ssh         FsxId0f1302327a12b6488 fsx-control-plane set -privilege diagnostic                                               Success -
"Fri Jan 05 01:09:01 2024" FsxId0f1302327a12b6488-01 ssh         FsxId0f1302327a12b6488 fsx-control-plane volume efficiency inactive-data-compression stop -volume vol1_restored -vserver svm
                                                                                                                                                                                  Success -
"Fri Jan 05 01:09:01 2024" FsxId0f1302327a12b6488-01 ssh         FsxId0f1302327a12b6488 fsx-control-plane Logging out                                                             Success -
"Fri Jan 05 01:09:01 2024" FsxId0f1302327a12b6488-01 http        FsxId0f1302327a12b6488 fsx-control-plane POST /api/private/cli : {"input":"set -privilege diagnostic ; volume efficiency inactive-data-compression stop -volume vol1_restored -vserver svm"}
                                                                                                                                                                                  Success -
"Fri Jan 05 01:09:01 2024" FsxId0f1302327a12b6488-01 ssh         FsxId0f1302327a12b6488 fsx-control-plane set -privilege diagnostic                                               Success -
"Fri Jan 05 01:09:01 2024" FsxId0f1302327a12b6488-01 ssh         FsxId0f1302327a12b6488 fsx-control-plane volume efficiency inactive-data-compression modify -volume vol1_restored -vserver svm -is-enabled false
                                                                                                                                                                                  Success -
"Fri Jan 05 01:09:01 2024" FsxId0f1302327a12b6488-01 ssh         FsxId0f1302327a12b6488 fsx-control-plane Logging out                                                             Success -
"Fri Jan 05 01:09:01 2024" FsxId0f1302327a12b6488-01 http        FsxId0f1302327a12b6488 fsx-control-plane POST /api/private/cli : {"input":"set -privilege diagnostic ; volume efficiency inactive-data-compression modify -volume vol1_restored -vserver svm -is-enabled false"}
                                                                                                                                                                                  Success -
"Fri Jan 05 01:09:22 2024" FsxId0f1302327a12b6488-01 http        FsxId0f1302327a12b6488 fsx-control-plane PATCH /api/storage/volumes/96a24aa2-ab66-11ee-b1b8-195a72820387 : {"tiering":{"policy":"NONE"},"nas":{"path":"/vol1_restored","security_style":"unix"},"efficiency":{"compression":"none","compaction":"none","dedupe":"none","cross_volume_dedupe":"none"},"snapshot_policy":{"name":"none"}}
                                                                                                                                                                                  Success -
"Fri Jan 05 01:09:33 2024" FsxId0f1302327a12b6488-01 http        FsxId0f1302327a12b6488 fsx-control-plane GET /api/private/cli/volume/efficiency/?vserver=svm&volume=vol1_restored&fields=op_status
                                                                                                                                                                                  Success -
"Fri Jan 05 01:09:33 2024" FsxId0f1302327a12b6488-01 http        FsxId0f1302327a12b6488 fsx-control-plane POST /api/private/cli/volume/efficiency/stop : {"volume":"vol1_restored","vserver":"svm","all":true}
                                                                                                                                                                                  Success -
"Fri Jan 05 01:09:43 2024" FsxId0f1302327a12b6488-01 http        FsxId0f1302327a12b6488 fsx-control-plane PATCH /api/storage/volumes/96a24aa2-ab66-11ee-b1b8-195a72820387 : {"tiering":{"policy":"NONE"},"nas":{"path":"/vol1_restored","security_style":"unix"},"efficiency":{"compression":"none","compaction":"none","dedupe":"none","cross_volume_dedupe":"none"},"snapshot_policy":{"name":"none"}}
                                                                                                                                                                                  Success -
"Fri Jan 05 01:15:24 2024" FsxId0f1302327a12b6488-01 http        FsxId0f1302327a12b6488 admin             GET /api/private/cli/storage/failover?fields=node,possible,reason       Success -
"Fri Jan 05 01:15:25 2024" FsxId0f1302327a12b6488-01 http        FsxId0f1302327a12b6488 admin             GET /api/private/cli/storage/aggregate?fields=raidstatus%2Ccomposite%2Croot%2Cuuid
                                                                                                                                                                                  Success -
"Fri Jan 05 01:18:43 2024" FsxId0f1302327a12b6488-01 ssh         FsxId0f1302327a12b6488 fsxadmin          Logging out                                                             Success -
"Fri Jan 05 01:20:24 2024" FsxId0f1302327a12b6488-01 http        FsxId0f1302327a12b6488 admin             GET /api/private/cli/storage/failover?fields=node,possible,reason       Success -
"Fri Jan 05 01:20:25 2024" FsxId0f1302327a12b6488-01 http        FsxId0f1302327a12b6488 admin             GET /api/private/cli/storage/aggregate?fields=raidstatus%2Ccomposite%2Croot%2Cuuid
                                                                                                                                                                                  Success -
"Fri Jan 05 01:21:29 2024" FsxId0f1302327a12b6488-01 ssh         FsxId0f1302327a12b6488 fsxadmin          Logging in                                                              Success -
"Fri Jan 05 01:21:34 2024" FsxId0f1302327a12b6488-01 ssh         FsxId0f1302327a12b6488 fsxadmin          Question: Warning: These diagnostic command... : y                      Success -
"Fri Jan 05 01:21:34 2024" FsxId0f1302327a12b6488-01 ssh         FsxId0f1302327a12b6488 fsxadmin          set diag                                                                Success -
39 entries were displayed.

The Storage Efficiency, volume, and aggregate information of the Osaka FSxN after the restore is as follows.

::*> volume efficiency show
Vserver    Volume           State     Status      Progress           Policy
---------- ---------------- --------- ----------- ------------------ ----------
svm        vol1             Enabled   Idle        Idle for 02:15:58  auto
svm        vol1_restored    Disabled  Idle        Idle for 00:12:10  -
2 entries were displayed.

::*> volume show -volume vol1* -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared,logical-used, logical-used-percent,logical-used-by-afs, logical-available
vserver volume size available filesystem-size total   used    percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- ------ ---- --------- --------------- ------- ------- ------------ ------------------ -------------------------- ------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- -------------------------------------------
svm     vol1   64GB 28.66GB   64GB            60.80GB 32.14GB 52%          0B                 0%                         0B                  32.14GB      53%                  -                 32.14GB             0B                                  0%
svm     vol1_restored
               64GB 31.61GB   64GB            64GB    32.39GB 50%          122.9MB            0%                         8KB                 32.51GB      51%                  -                 32.45GB             0B                                  0%
2 entries were displayed.

::*> volume show-footprint -volume vol1*


      Vserver : svm
      Volume  : vol1

      Feature                                         Used       Used%
      --------------------------------             ----------    -----
      Volume Data Footprint                           32.14GB       4%
             Footprint in Performance Tier            32.27GB     100%
             Footprint in FSxFabricpoolObjectStore         0B       0%
      Volume Guarantee                                     0B       0%
      Flexible Volume Metadata                        322.4MB       0%
      Delayed Frees                                   133.8MB       0%
      File Operation Metadata                             4KB       0%

      Total Footprint                                 32.58GB       4%

      Footprint Data Reduction                        30.91GB       3%
           Auto Adaptive Compression                  30.91GB       3%
      Effective Total Footprint                        1.67GB       0%


      Vserver : svm
      Volume  : vol1_restored

      Feature                                         Used       Used%
      --------------------------------             ----------    -----
      Volume Data Footprint                           32.39GB       4%
             Footprint in Performance Tier            32.41GB     100%
             Footprint in FSxFabricpoolObjectStore         0B       0%
      Volume Guarantee                                     0B       0%
      Flexible Volume Metadata                        322.4MB       0%
      Deduplication Metadata                          241.0MB       0%
           Temporary Deduplication                    241.0MB       0%
      Delayed Frees                                   25.24MB       0%
      File Operation Metadata                             4KB       0%

      Total Footprint                                 32.96GB       4%

      Effective Total Footprint                       32.96GB       4%
2 entries were displayed.

::*> aggr show-efficiency -instance

                             Name of the Aggregate: aggr1
                      Node where Aggregate Resides: FsxId0f1302327a12b6488-01
Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 129.0GB
                               Total Physical Used: 37.71GB
                    Total Storage Efficiency Ratio: 3.42:1
Total Data Reduction Logical Used Without Snapshots: 64.37GB
Total Data Reduction Physical Used Without Snapshots: 37.68GB
Total Data Reduction Efficiency Ratio Without Snapshots: 1.71:1
Total Data Reduction Logical Used without snapshots and flexclones: 64.37GB
Total Data Reduction Physical Used without snapshots and flexclones: 37.68GB
Total Data Reduction Efficiency Ratio without snapshots and flexclones: 1.71:1
Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 129.2GB
Total Physical Used in FabricPool Performance Tier: 38.06GB
Total FabricPool Performance Tier Storage Efficiency Ratio: 3.39:1
Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 64.59GB
Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 38.03GB
Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 1.70:1
                Logical Space Used for All Volumes: 64.37GB
               Physical Space Used for All Volumes: 64.25GB
               Space Saved by Volume Deduplication: 122.9MB
Space Saved by Volume Deduplication and pattern detection: 122.9MB
                Volume Deduplication Savings ratio: 1.00:1
                 Space Saved by Volume Compression: 0B
                  Volume Compression Savings ratio: 1.00:1
      Space Saved by Inline Zero Pattern Detection: 0B
                    Volume Data Reduction SE Ratio: 1.00:1
               Logical Space Used by the Aggregate: 68.34GB
              Physical Space Used by the Aggregate: 37.71GB
           Space Saved by Aggregate Data Reduction: 30.63GB
                 Aggregate Data Reduction SE Ratio: 1.81:1
              Logical Size Used by Snapshot Copies: 64.59GB
             Physical Size Used by Snapshot Copies: 59.45MB
              Snapshot Volume Data Reduction Ratio: 1112.55:1
            Logical Size Used by FlexClone Volumes: 0B
          Physical Sized Used by FlexClone Volumes: 0B
             FlexClone Volume Data Reduction Ratio: 1.00:1
Snapshot And FlexClone Volume Data Reduction SE Ratio: 1112.55:1
                         Number of Volumes Offline: 0
                    Number of SIS Disabled Volumes: 1
         Number of SIS Change Log Disabled Volumes: 1

::*> aggr show -fields availsize, usedsize, size, physical-used, physical-used-percent, data-compaction-space-saved, data-compaction-space-saved-percent, data-compacted-count, composite-capacity-tier-used, sis-space-saved, sis-space-saved-percent, sis-shared-count, inactive-data-reporting-start-timestamp
aggregate availsize size    usedsize physical-used physical-used-percent data-compaction-space-saved data-compaction-space-saved-percent data-compacted-count composite-capacity-tier-used sis-space-saved sis-space-saved-percent sis-shared-count inactive-data-reporting-start-timestamp
--------- --------- ------- -------- ------------- --------------------- --------------------------- ----------------------------------- -------------------- ---------------------------- --------------- ----------------------- ---------------- ---------------------------------------
aggr1     816.1GB   861.8GB 45.68GB  44.18GB       5%                    30.63GB                     40%                                 1.36GB               0B                           30.63GB        40%                     1.36GB           -

::*> aggr show-space

      Aggregate : aggr1
      Performance Tier
      Feature                                          Used      Used%
      --------------------------------           ----------     ------
      Volume Footprints                             66.56GB         7%
      Aggregate Metadata                             9.75GB         1%
      Snapshot Reserve                              45.36GB         5%
      Total Used                                    91.03GB        10%

      Total Physical Used                           44.18GB         5%


      Total Provisioned Space                         129GB        14%


      Aggregate : aggr1
      Object Store: FSxFabricpoolObjectStore
      Feature                                          Used      Used%
      --------------------------------           ----------     ------
      Logical Used                                       0B          -
      Logical Referenced Capacity                        0B          -
      Logical Unreferenced Capacity                      0B          -

      Total Physical Used                                0B          -



2 entries were displayed.

Since Total Physical Used under Performance Tier in aggr show-space has gone from 12.08GB to 44.18GB, we can see that the aggregate-layer data reduction has been lost.

Whether the data being backed up was on SSD or in capacity pool storage does not seem to matter.

Checking the capacity pool storage size when a volume without Inactive data compression is set to Tiering Policy All

Changing the Tiering Policy to All

We confirmed that the restored volume has lost its aggregate-layer data reduction.

What happens if we tier this volume to capacity pool storage in this state? I became curious whether some form of data reduction kicks in at tiering time, or whether the data is tiered at the same size.

Let's change the restored volume's Tiering Policy to All and tier it to capacity pool storage.

::*> volume modify -volume vol1_restored -tiering-policy all
Volume modify successful on volume vol1_restored of Vserver svm.

::*> volume show -volume vol1_restored -fields tiering-
    tiering-policy               tiering-minimum-cooling-days
    tiering-object-tags
::*> volume show -volume vol1_restored -fields tiering-policy
vserver volume        tiering-policy
------- ------------- --------------
svm     vol1_restored all

::*>
::*> volume show-footprint -volume vol1_restored


      Vserver : svm
      Volume  : vol1_restored

      Feature                                         Used       Used%
      --------------------------------             ----------    -----
      Volume Data Footprint                           32.39GB       4%
             Footprint in Performance Tier            27.06GB      83%
             Footprint in FSxFabricpoolObjectStore
                                                       5.36GB      17%
      Volume Guarantee                                     0B       0%
      Flexible Volume Metadata                        322.4MB       0%
      Deduplication Metadata                          241.0MB       0%
           Temporary Deduplication                    241.0MB       0%
      Delayed Frees                                   27.98MB       0%
      File Operation Metadata                             4KB       0%

      Total Footprint                                 32.96GB       4%

      Footprint Data Reduction in capacity tier        5.09GB        -
      Effective Total Footprint                       27.88GB       3%

::*> volume show-footprint -volume vol1_restored


      Vserver : svm
      Volume  : vol1_restored

      Feature                                         Used       Used%
      --------------------------------             ----------    -----
      Volume Data Footprint                           32.39GB       4%
             Footprint in Performance Tier            16.48GB      51%
             Footprint in FSxFabricpoolObjectStore
                                                      15.94GB      49%
      Volume Guarantee                                     0B       0%
      Flexible Volume Metadata                        322.4MB       0%
      Deduplication Metadata                          241.0MB       0%
           Temporary Deduplication                    241.0MB       0%
      Delayed Frees                                   32.61MB       0%
      File Operation Metadata                             4KB       0%

      Total Footprint                                 32.97GB       4%

      Footprint Data Reduction in capacity tier       15.14GB        -
      Effective Total Footprint                       17.83GB       2%

::*> volume show-footprint -volume vol1_restored


      Vserver : svm
      Volume  : vol1_restored

      Feature                                         Used       Used%
      --------------------------------             ----------    -----
      Volume Data Footprint                           32.39GB       4%
             Footprint in Performance Tier            714.4MB       2%
             Footprint in FSxFabricpoolObjectStore
                                                      31.73GB      98%
      Volume Guarantee                                     0B       0%
      Flexible Volume Metadata                        322.4MB       0%
      Deduplication Metadata                          241.0MB       0%
           Temporary Deduplication                    241.0MB       0%
      Delayed Frees                                   39.63MB       0%
      File Operation Metadata                             4KB       0%

      Total Footprint                                 32.98GB       4%

      Footprint Data Reduction in capacity tier       30.14GB        -
      Effective Total Footprint                        2.83GB       0%

::*> volume show-footprint -volume vol1_restored


      Vserver : svm
      Volume  : vol1_restored

      Feature                                         Used       Used%
      --------------------------------             ----------    -----
      Volume Data Footprint                           32.51GB       4%
             Footprint in Performance Tier            633.2MB       2%
             Footprint in FSxFabricpoolObjectStore
                                                      31.94GB      98%
      Volume Guarantee                                     0B       0%
      Flexible Volume Metadata                        322.4MB       0%
      Deduplication Metadata                          241.0MB       0%
           Temporary Deduplication                    241.0MB       0%
      Delayed Frees                                   41.20MB       0%
      File Operation Metadata                             4KB       0%

      Total Footprint                                 33.11GB       4%

      Footprint Data Reduction in capacity tier       30.34GB        -
      Effective Total Footprint                        2.76GB       0%

98% of the data has been tiered to capacity pool storage.

Also, since Footprint Data Reduction in capacity tier is 30.34GB, it is highly likely that some form of data reduction takes effect at tiering time.
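
As a rough sanity check, the figures above are consistent: 33.11GB (Total Footprint) - 30.34GB (Footprint Data Reduction in capacity tier) ≈ 2.76GB (Effective Total Footprint).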

The Storage Efficiency, volume, and aggregate information for the Osaka FSxN after tiering to capacity pool storage is as follows.

::*> volume efficiency show
Vserver    Volume           State     Status      Progress           Policy
---------- ---------------- --------- ----------- ------------------ ----------
svm        vol1             Enabled   Idle        Idle for 02:27:37  auto
svm        vol1_restored    Disabled  Idle        Idle for 00:23:49  -
2 entries were displayed.

::*> volume show -volume vol1* -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared,logical-used, logical-used-percent,logical-used-by-afs, logical-available
vserver volume size available filesystem-size total   used    percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- ------ ---- --------- --------------- ------- ------- ------------ ------------------ -------------------------- ------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- -------------------------------------------
svm     vol1   64GB 28.66GB   64GB            60.80GB 32.14GB 52%          0B                 0%                         0B                  32.14GB      53%                  -                 32.14GB             0B                                  0%
svm     vol1_restored
               64GB 31.48GB   64GB            64GB    32.51GB 50%          122.9MB            0%                         8KB                 32.63GB      51%                  -                 32.45GB             -                                   -
2 entries were displayed.

::*> volume show-footprint -volume vol1*


      Vserver : svm
      Volume  : vol1

      Feature                                         Used       Used%
      --------------------------------             ----------    -----
      Volume Data Footprint                           32.14GB       4%
             Footprint in Performance Tier            32.27GB     100%
             Footprint in FSxFabricpoolObjectStore         0B       0%
      Volume Guarantee                                     0B       0%
      Flexible Volume Metadata                        322.4MB       0%
      Delayed Frees                                   133.8MB       0%
      File Operation Metadata                             4KB       0%

      Total Footprint                                 32.58GB       4%

      Footprint Data Reduction                        30.91GB       3%
           Auto Adaptive Compression                  30.91GB       3%
      Effective Total Footprint                        1.67GB       0%


      Vserver : svm
      Volume  : vol1_restored

      Feature                                         Used       Used%
      --------------------------------             ----------    -----
      Volume Data Footprint                           32.51GB       4%
             Footprint in Performance Tier            633.2MB       2%
             Footprint in FSxFabricpoolObjectStore
                                                      31.94GB      98%
      Volume Guarantee                                     0B       0%
      Flexible Volume Metadata                        322.4MB       0%
      Deduplication Metadata                          241.0MB       0%
           Temporary Deduplication                    241.0MB       0%
      Delayed Frees                                   41.20MB       0%
      File Operation Metadata                             4KB       0%

      Total Footprint                                 33.11GB       4%

      Footprint Data Reduction in capacity tier       30.34GB        -
      Effective Total Footprint                        2.76GB       0%
2 entries were displayed.

::*> aggr show-efficiency -instance

                             Name of the Aggregate: aggr1
                      Node where Aggregate Resides: FsxId0f1302327a12b6488-01
Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 129.0GB
                               Total Physical Used: 12.21GB
                    Total Storage Efficiency Ratio: 10.56:1
Total Data Reduction Logical Used Without Snapshots: 64.37GB
Total Data Reduction Physical Used Without Snapshots: 12.15GB
Total Data Reduction Efficiency Ratio Without Snapshots: 5.30:1
Total Data Reduction Logical Used without snapshots and flexclones: 64.37GB
Total Data Reduction Physical Used without snapshots and flexclones: 12.15GB
Total Data Reduction Efficiency Ratio without snapshots and flexclones: 5.30:1
Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 65.51GB
Total Physical Used in FabricPool Performance Tier: 10.92GB
Total FabricPool Performance Tier Storage Efficiency Ratio: 6.00:1
Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 32.76GB
Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 10.92GB
Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 3.00:1
                Logical Space Used for All Volumes: 64.37GB
               Physical Space Used for All Volumes: 64.25GB
               Space Saved by Volume Deduplication: 122.9MB
Space Saved by Volume Deduplication and pattern detection: 122.9MB
                Volume Deduplication Savings ratio: 1.00:1
                 Space Saved by Volume Compression: 0B
                  Volume Compression Savings ratio: 1.00:1
      Space Saved by Inline Zero Pattern Detection: 0B
                    Volume Data Reduction SE Ratio: 1.00:1
               Logical Space Used by the Aggregate: 42.84GB
              Physical Space Used by the Aggregate: 12.21GB
           Space Saved by Aggregate Data Reduction: 30.63GB
                 Aggregate Data Reduction SE Ratio: 3.51:1
              Logical Size Used by Snapshot Copies: 64.59GB
             Physical Size Used by Snapshot Copies: 188.2MB
              Snapshot Volume Data Reduction Ratio: 351.39:1
            Logical Size Used by FlexClone Volumes: 0B
          Physical Sized Used by FlexClone Volumes: 0B
             FlexClone Volume Data Reduction Ratio: 1.00:1
Snapshot And FlexClone Volume Data Reduction SE Ratio: 351.39:1
                         Number of Volumes Offline: 0
                    Number of SIS Disabled Volumes: 1
         Number of SIS Change Log Disabled Volumes: 1

::*> aggr show -fields availsize, usedsize, size, physical-used, physical-used-percent, data-compaction-space-saved, data-compaction-space-saved-percent, data-compacted-count, composite-capacity-tier-used, sis-space-saved, sis-space-saved-percent, sis-shared-count, inactive-data-reporting-start-timestamp
aggregate availsize size    usedsize physical-used physical-used-percent data-compaction-space-saved data-compaction-space-saved-percent data-compacted-count composite-capacity-tier-used sis-space-saved sis-space-saved-percent sis-shared-count inactive-data-reporting-start-timestamp
--------- --------- ------- -------- ------------- --------------------- --------------------------- ----------------------------------- -------------------- ---------------------------- --------------- ----------------------- ---------------- ---------------------------------------
aggr1     844.5GB   861.8GB 17.27GB  15.82GB       2%                    30.63GB                     64%                                 1.36GB               1.43GB                       30.63GB        64%                     1.36GB           -

::*> aggr show-space

      Aggregate : aggr1
      Performance Tier
      Feature                                          Used      Used%
      --------------------------------           ----------     ------
      Volume Footprints                             34.76GB         4%
      Aggregate Metadata                            13.13GB         1%
      Snapshot Reserve                              45.36GB         5%
      Total Used                                    62.62GB         7%

      Total Physical Used                           15.82GB         2%


      Total Provisioned Space                         129GB        14%


      Aggregate : aggr1
      Object Store: FSxFabricpoolObjectStore
      Feature                                          Used      Used%
      --------------------------------           ----------     ------
      Logical Used                                  32.21GB          -
      Logical Referenced Capacity                   32.06GB          -
      Logical Unreferenced Capacity                 156.1MB          -
      Space Saved by Storage Efficiency             30.78GB          -

      Total Physical Used                            1.43GB          -



2 entries were displayed.

Since Total Physical Used under Object Store: FSxFabricpoolObjectStore in aggr show-space is 1.43GB, we can see that the physical usage of capacity pool storage is far smaller than its logical size.

In other words, some form of data reduction is applied at the time of tiering.
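
The aggr show-space output above adds up exactly: 32.21GB (Logical Used) - 30.78GB (Space Saved by Storage Efficiency) = 1.43GB (Total Physical Used), so the 1.43GB of physical usage is the logical size minus the storage-efficiency savings applied at tiering time.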

After leaving it for a little under four more hours, the Osaka FSxN's Storage Efficiency, volume, and aggregate information is as follows.

::*> volume efficiency show
Vserver    Volume           State     Status      Progress           Policy
---------- ---------------- --------- ----------- ------------------ ----------
svm        vol1             Enabled   Idle        Idle for 06:04:13  auto
svm        vol1_restored    Disabled  Idle        Idle for 04:00:25  -
2 entries were displayed.

::*> volume show -volume vol1* -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared,logical-used, logical-used-percent,logical-used-by-afs, logical-available
vserver volume size available filesystem-size total   used    percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- ------ ---- --------- --------------- ------- ------- ------------ ------------------ -------------------------- ------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- -------------------------------------------
svm     vol1   64GB 28.66GB   64GB            60.80GB 32.14GB 52%          0B                 0%                         0B                  32.14GB      53%                  -                 32.14GB             0B                                  0%
svm     vol1_restored
               64GB 31.48GB   64GB            64GB    32.51GB 50%          122.9MB            0%                         8KB                 32.63GB      51%                  -                 32.45GB             -                                   -
2 entries were displayed.

::*> volume show-footprint -volume vol1*


      Vserver : svm
      Volume  : vol1

      Feature                                         Used       Used%
      --------------------------------             ----------    -----
      Volume Data Footprint                           32.14GB       4%
             Footprint in Performance Tier            32.27GB     100%
             Footprint in FSxFabricpoolObjectStore         0B       0%
      Volume Guarantee                                     0B       0%
      Flexible Volume Metadata                        322.4MB       0%
      Delayed Frees                                   133.8MB       0%
      File Operation Metadata                             4KB       0%

      Total Footprint                                 32.58GB       4%

      Footprint Data Reduction                        30.91GB       3%
           Auto Adaptive Compression                  30.91GB       3%
      Effective Total Footprint                        1.67GB       0%


      Vserver : svm
      Volume  : vol1_restored

      Feature                                         Used       Used%
      --------------------------------             ----------    -----
      Volume Data Footprint                           32.51GB       4%
             Footprint in Performance Tier            633.4MB       2%
             Footprint in FSxFabricpoolObjectStore
                                                      31.94GB      98%
      Volume Guarantee                                     0B       0%
      Flexible Volume Metadata                        322.4MB       0%
      Deduplication Metadata                          241.0MB       0%
           Temporary Deduplication                    241.0MB       0%
      Delayed Frees                                   41.35MB       0%
      File Operation Metadata                             4KB       0%

      Total Footprint                                 33.11GB       4%

      Footprint Data Reduction in capacity tier       30.34GB        -
      Effective Total Footprint                        2.77GB       0%
2 entries were displayed.

::*> aggr show-efficiency -instance

                             Name of the Aggregate: aggr1
                      Node where Aggregate Resides: FsxId0f1302327a12b6488-01
Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 129.0GB
                               Total Physical Used: 11.15GB
                    Total Storage Efficiency Ratio: 11.57:1
Total Data Reduction Logical Used Without Snapshots: 64.37GB
Total Data Reduction Physical Used Without Snapshots: 11.10GB
Total Data Reduction Efficiency Ratio Without Snapshots: 5.80:1
Total Data Reduction Logical Used without snapshots and flexclones: 64.37GB
Total Data Reduction Physical Used without snapshots and flexclones: 11.10GB
Total Data Reduction Efficiency Ratio without snapshots and flexclones: 5.80:1
Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 65.51GB
Total Physical Used in FabricPool Performance Tier: 9.86GB
Total FabricPool Performance Tier Storage Efficiency Ratio: 6.64:1
Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 32.76GB
Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 9.86GB
Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 3.32:1
                Logical Space Used for All Volumes: 64.37GB
               Physical Space Used for All Volumes: 64.25GB
               Space Saved by Volume Deduplication: 122.9MB
Space Saved by Volume Deduplication and pattern detection: 122.9MB
                Volume Deduplication Savings ratio: 1.00:1
                 Space Saved by Volume Compression: 0B
                  Volume Compression Savings ratio: 1.00:1
      Space Saved by Inline Zero Pattern Detection: 0B
                    Volume Data Reduction SE Ratio: 1.00:1
               Logical Space Used by the Aggregate: 41.78GB
              Physical Space Used by the Aggregate: 11.15GB
           Space Saved by Aggregate Data Reduction: 30.63GB
                 Aggregate Data Reduction SE Ratio: 3.75:1
              Logical Size Used by Snapshot Copies: 64.59GB
             Physical Size Used by Snapshot Copies: 188.6MB
              Snapshot Volume Data Reduction Ratio: 350.69:1
            Logical Size Used by FlexClone Volumes: 0B
          Physical Sized Used by FlexClone Volumes: 0B
             FlexClone Volume Data Reduction Ratio: 1.00:1
Snapshot And FlexClone Volume Data Reduction SE Ratio: 350.69:1
                         Number of Volumes Offline: 0
                    Number of SIS Disabled Volumes: 1
         Number of SIS Change Log Disabled Volumes: 1

::*> aggr show -fields availsize, usedsize, size, physical-used, physical-used-percent, data-compaction-space-saved, data-compaction-space-saved-percent, data-compacted-count, composite-capacity-tier-used, sis-space-saved, sis-space-saved-percent, sis-shared-count, inactive-data-reporting-start-timestamp
aggregate availsize size    usedsize physical-used physical-used-percent data-compaction-space-saved data-compaction-space-saved-percent data-compacted-count composite-capacity-tier-used sis-space-saved sis-space-saved-percent sis-shared-count inactive-data-reporting-start-timestamp
--------- --------- ------- -------- ------------- --------------------- --------------------------- ----------------------------------- -------------------- ---------------------------- --------------- ----------------------- ---------------- ---------------------------------------
aggr1     846.8GB   861.8GB 15.01GB  13.53GB       1%                    30.63GB                     67%                                 1.36GB               1.43GB                       30.63GB        67%                     1.36GB           -

::*> aggr show-space

      Aggregate : aggr1
      Performance Tier
      Feature                                          Used      Used%
      --------------------------------           ----------     ------
      Volume Footprints                             34.76GB         4%
      Aggregate Metadata                            10.88GB         1%
      Snapshot Reserve                              45.36GB         5%
      Total Used                                    60.37GB         7%

      Total Physical Used                           13.53GB         1%


      Total Provisioned Space                         129GB        14%


      Aggregate : aggr1
      Object Store: FSxFabricpoolObjectStore
      Feature                                          Used      Used%
      --------------------------------           ----------     ------
      Logical Used                                  32.21GB          -
      Logical Referenced Capacity                   32.06GB          -
      Logical Unreferenced Capacity                 156.1MB          -
      Space Saved by Storage Efficiency             30.78GB          -

      Total Physical Used                            1.43GB          -



2 entries were displayed.

The physical usage of capacity pool storage is unchanged at 1.43GB.

Writing back to SSD

The next thing I want to know is whether data that was tiered to the capacity pool without aggregate-layer data reduction such as Inactive data compression or compaction being applied on SSD can keep the data reduction gained at tiering time when it is written back to SSD.

I have already confirmed in the following article that data that already has Inactive data compression or compaction applied keeps its data reduction when written back to SSD.

Could it be that there is actually no need to apply aggregate-layer data reduction on the SSD tier beforehand?

Let's write the data back to SSD.

::*> volume show -volume vol1_restored -fields tiering-policy, cloud-retrieval-policy
vserver volume        tiering-policy cloud-retrieval-policy
------- ------------- -------------- ----------------------
svm     vol1_restored all            default

::*> volume modify -vserver svm -volume vol1_restored -tiering-policy none -cloud-retrieval-policy promote

Warning: The "promote" cloud retrieve policy retrieves all of the cloud data for the specified volume. If the tiering policy is "snapshot-only" then only AFS data is retrieved. If the tiering
         policy is "none" then all data is retrieved. Volume "vol1_restored" in Vserver "svm" is on a FabricPool, and there are approximately 34291929088 bytes tiered to the cloud that will be
         retrieved. Cloud retrieval may take a significant amount of time, and may degrade performance during that time. The cloud retrieve operation may also result in data charges by your
         object store provider.
Do you want to continue? {y|n}: y
Volume modify successful on volume vol1_restored of Vserver svm.

::*> volume show -volume vol1_restored -fields tiering-policy, cloud-retrieval-policy
vserver volume        tiering-policy cloud-retrieval-policy
------- ------------- -------------- ----------------------
svm     vol1_restored none           promote

::*> volume show-footprint -volume vol1_restored


      Vserver : svm
      Volume  : vol1_restored

      Feature                                         Used       Used%
      --------------------------------             ----------    -----
      Volume Data Footprint                           32.51GB       4%
             Footprint in Performance Tier            633.7MB       2%
             Footprint in FSxFabricpoolObjectStore
                                                      31.94GB      98%
      Volume Guarantee                                     0B       0%
      Flexible Volume Metadata                        322.4MB       0%
      Deduplication Metadata                          241.0MB       0%
           Temporary Deduplication                    241.0MB       0%
      Delayed Frees                                   41.62MB       0%
      File Operation Metadata                             4KB       0%

      Total Footprint                                 33.11GB       4%

      Footprint Data Reduction in capacity tier       30.34GB        -
      Effective Total Footprint                        2.77GB       0%

::*> volume show-footprint -volume vol1_restored


      Vserver : svm
      Volume  : vol1_restored

      Feature                                         Used       Used%
      --------------------------------             ----------    -----
      Volume Data Footprint                           32.51GB       4%
             Footprint in Performance Tier             2.40GB       7%
             Footprint in FSxFabricpoolObjectStore
                                                      30.16GB      93%
      Volume Guarantee                                     0B       0%
      Flexible Volume Metadata                        322.4MB       0%
      Deduplication Metadata                          241.0MB       0%
           Temporary Deduplication                    241.0MB       0%
      Delayed Frees                                   43.81MB       0%
      File Operation Metadata                             4KB       0%

      Total Footprint                                 33.11GB       4%

      Footprint Data Reduction in capacity tier       28.65GB        -
      Effective Total Footprint                        4.45GB       0%

::*> volume show-footprint -volume vol1_restored


      Vserver : svm
      Volume  : vol1_restored

      Feature                                         Used       Used%
      --------------------------------             ----------    -----
      Volume Data Footprint                           32.51GB       4%
             Footprint in Performance Tier             7.03GB      22%
             Footprint in FSxFabricpoolObjectStore
                                                      25.54GB      78%
      Volume Guarantee                                     0B       0%
      Flexible Volume Metadata                        322.4MB       0%
      Deduplication Metadata                          241.0MB       0%
           Temporary Deduplication                    241.0MB       0%
      Delayed Frees                                   51.88MB       0%
      File Operation Metadata                             4KB       0%

      Total Footprint                                 33.12GB       4%

      Footprint Data Reduction in capacity tier       24.26GB        -
      Effective Total Footprint                        8.85GB       1%

Even after waiting more than 10 minutes, only 22% has been written back.

I waited about three hours.

::*> volume show-footprint -volume vol1_restored


      Vserver : svm
      Volume  : vol1_restored

      Feature                                         Used       Used%
      --------------------------------             ----------    -----
      Volume Data Footprint                           32.52GB       4%
             Footprint in Performance Tier            30.44GB      93%
             Footprint in FSxFabricpoolObjectStore
                                                       2.30GB       7%
      Volume Guarantee                                     0B       0%
      Flexible Volume Metadata                        322.4MB       0%
      Deduplication Metadata                          241.0MB       0%
           Temporary Deduplication                    241.0MB       0%
      Delayed Frees                                   220.4MB       0%
      File Operation Metadata                             4KB       0%

      Total Footprint                                 33.29GB       4%

      Footprint Data Reduction in capacity tier        2.19GB        -
      Effective Total Footprint                       31.10GB       3%

Still 93%, but the end is finally in sight.

At this point, the Osaka FSxN's Storage Efficiency, volume, and aggregate information is as follows.

::*> volume efficiency show
Vserver    Volume           State     Status      Progress           Policy
---------- ---------------- --------- ----------- ------------------ ----------
svm        vol1             Enabled   Idle        Idle for 09:25:52  auto
svm        vol1_restored    Disabled  Idle        Idle for 07:22:04  -
2 entries were displayed.

::*> volume show -volume vol1* -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared,logical-used, logical-used-percent,logical-used-by-afs, logical-available
vserver volume size available filesystem-size total   used    percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- ------ ---- --------- --------------- ------- ------- ------------ ------------------ -------------------------- ------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- -------------------------------------------
svm     vol1   64GB 28.66GB   64GB            60.80GB 32.14GB 52%          0B                 0%                         0B                  32.14GB      53%                  -                 32.14GB             0B                                  0%
svm     vol1_restored
               64GB 31.47GB   64GB            64GB    32.52GB 50%          122.9MB            0%                         8KB                 32.64GB      51%                  -                 32.46GB             0B                                  0%
2 entries were displayed.

::*> volume show-footprint -volume vol1*


      Vserver : svm
      Volume  : vol1

      Feature                                         Used       Used%
      --------------------------------             ----------    -----
      Volume Data Footprint                           32.14GB       4%
             Footprint in Performance Tier            32.27GB     100%
             Footprint in FSxFabricpoolObjectStore         0B       0%
      Volume Guarantee                                     0B       0%
      Flexible Volume Metadata                        322.4MB       0%
      Delayed Frees                                   133.8MB       0%
      File Operation Metadata                             4KB       0%

      Total Footprint                                 32.58GB       4%

      Footprint Data Reduction                        30.91GB       3%
           Auto Adaptive Compression                  30.91GB       3%
      Effective Total Footprint                        1.67GB       0%


      Vserver : svm
      Volume  : vol1_restored

      Feature                                         Used       Used%
      --------------------------------             ----------    -----
      Volume Data Footprint                           32.52GB       4%
             Footprint in Performance Tier            30.45GB      93%
             Footprint in FSxFabricpoolObjectStore
                                                       2.29GB       7%
      Volume Guarantee                                     0B       0%
      Flexible Volume Metadata                        322.4MB       0%
      Deduplication Metadata                          241.0MB       0%
           Temporary Deduplication                    241.0MB       0%
      Delayed Frees                                   220.5MB       0%
      File Operation Metadata                             4KB       0%

      Total Footprint                                 33.29GB       4%

      Footprint Data Reduction in capacity tier        2.18GB        -
      Effective Total Footprint                       31.11GB       3%
2 entries were displayed.

::*> aggr show-efficiency -instance

                             Name of the Aggregate: aggr1
                      Node where Aggregate Resides: FsxId0f1302327a12b6488-01
Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 129.0GB
                               Total Physical Used: 35.82GB
                    Total Storage Efficiency Ratio: 3.60:1
Total Data Reduction Logical Used Without Snapshots: 64.38GB
Total Data Reduction Physical Used Without Snapshots: 35.72GB
Total Data Reduction Efficiency Ratio Without Snapshots: 1.80:1
Total Data Reduction Logical Used without snapshots and flexclones: 64.38GB
Total Data Reduction Physical Used without snapshots and flexclones: 35.72GB
Total Data Reduction Efficiency Ratio without snapshots and flexclones: 1.80:1
Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 124.7GB
Total Physical Used in FabricPool Performance Tier: 35.87GB
Total FabricPool Performance Tier Storage Efficiency Ratio: 3.48:1
Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 62.33GB
Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 35.77GB
Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 1.74:1
                Logical Space Used for All Volumes: 64.38GB
               Physical Space Used for All Volumes: 64.26GB
               Space Saved by Volume Deduplication: 122.9MB
Space Saved by Volume Deduplication and pattern detection: 122.9MB
                Volume Deduplication Savings ratio: 1.00:1
                 Space Saved by Volume Compression: 0B
                  Volume Compression Savings ratio: 1.00:1
      Space Saved by Inline Zero Pattern Detection: 0B
                    Volume Data Reduction SE Ratio: 1.00:1
               Logical Space Used by the Aggregate: 66.45GB
              Physical Space Used by the Aggregate: 35.82GB
           Space Saved by Aggregate Data Reduction: 30.63GB
                 Aggregate Data Reduction SE Ratio: 1.86:1
              Logical Size Used by Snapshot Copies: 64.59GB
             Physical Size Used by Snapshot Copies: 188.6MB
              Snapshot Volume Data Reduction Ratio: 350.67:1
            Logical Size Used by FlexClone Volumes: 0B
          Physical Sized Used by FlexClone Volumes: 0B
             FlexClone Volume Data Reduction Ratio: 1.00:1
Snapshot And FlexClone Volume Data Reduction SE Ratio: 350.67:1
                         Number of Volumes Offline: 0
                    Number of SIS Disabled Volumes: 1
         Number of SIS Change Log Disabled Volumes: 1

::*> aggr show -fields availsize, usedsize, size, physical-used, physical-used-percent, data-compaction-space-saved, data-compaction-space-saved-percent, data-compacted-count, composite-capacity-tier-used, sis-space-saved, sis-space-saved-percent, sis-shared-count, inactive-data-reporting-start-timestamp
aggregate availsize size    usedsize physical-used physical-used-percent data-compaction-space-saved data-compaction-space-saved-percent data-compacted-count composite-capacity-tier-used sis-space-saved sis-space-saved-percent sis-shared-count inactive-data-reporting-start-timestamp
--------- --------- ------- -------- ------------- --------------------- --------------------------- ----------------------------------- -------------------- ---------------------------- --------------- ----------------------- ---------------- ---------------------------------------
aggr1     818.8GB   861.8GB 42.94GB  41.46GB       5%                    30.63GB                     42%                                 1.36GB               129.6MB                      30.63GB        42%                     1.36GB           -

::*> aggr show-space

      Aggregate : aggr1
      Performance Tier
      Feature                                          Used      Used%
      --------------------------------           ----------     ------
      Volume Footprints                             64.59GB         7%
      Aggregate Metadata                             8.98GB         1%
      Snapshot Reserve                              45.36GB         5%
      Total Used                                    88.30GB        10%

      Total Physical Used                           41.46GB         5%


      Total Provisioned Space                         129GB        14%


      Aggregate : aggr1
      Object Store: FSxFabricpoolObjectStore
      Feature                                          Used      Used%
      --------------------------------           ----------     ------
      Logical Used                                   2.84GB          -
      Logical Referenced Capacity                    2.71GB          -
      Logical Unreferenced Capacity                 133.3MB          -
      Space Saved by Storage Efficiency              2.71GB          -

      Total Physical Used                           129.6MB          -



2 entries were displayed.

Since Total Physical Used under Performance Tier in aggr show-space has gone from 13.53GB to 41.46GB, the data written back to SSD appears to have lost the aggregate-layer data reduction.

In total, I waited a little under four hours.

::*> volume show-footprint -volume vol1_restored


      Vserver : svm
      Volume  : vol1_restored

      Feature                                         Used       Used%
      --------------------------------             ----------    -----
      Volume Data Footprint                           32.53GB       4%
             Footprint in Performance Tier            32.90GB     100%
             Footprint in FSxFabricpoolObjectStore         0B       0%
      Volume Guarantee                                     0B       0%
      Flexible Volume Metadata                        322.4MB       0%
      Deduplication Metadata                          241.0MB       0%
           Temporary Deduplication                    241.0MB       0%
      Delayed Frees                                   378.0MB       0%
      File Operation Metadata                             4KB       0%

      Total Footprint                                 33.45GB       4%

      Effective Total Footprint                       33.45GB       4%

All of the data has finally been written back to SSD.

After the write-back to SSD completed, the Osaka FSxN's Storage Efficiency, volume, and aggregate information is as follows.

::*> volume efficiency show
Vserver    Volume           State     Status      Progress           Policy
---------- ---------------- --------- ----------- ------------------ ----------
svm        vol1             Enabled   Idle        Idle for 10:05:29  auto
svm        vol1_restored    Disabled  Idle        Idle for 08:01:41  -
2 entries were displayed.

::*> volume show -volume vol1* -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared,logical-used, logical-used-percent,logical-used-by-afs, logical-available
vserver volume size available filesystem-size total   used    percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- ------ ---- --------- --------------- ------- ------- ------------ ------------------ -------------------------- ------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- -------------------------------------------
svm     vol1   64GB 28.66GB   64GB            60.80GB 32.14GB 52%          0B                 0%                         0B                  32.14GB      53%                  -                 32.14GB             0B                                  0%
svm     vol1_restored
               64GB 31.47GB   64GB            64GB    32.53GB 50%          122.9MB            0%                         8KB                 32.65GB      51%                  -                 32.46GB             0B                                  0%
2 entries were displayed.

::*> volume show-footprint -volume vol1*


      Vserver : svm
      Volume  : vol1

      Feature                                         Used       Used%
      --------------------------------             ----------    -----
      Volume Data Footprint                           32.14GB       4%
             Footprint in Performance Tier            32.27GB     100%
             Footprint in FSxFabricpoolObjectStore         0B       0%
      Volume Guarantee                                     0B       0%
      Flexible Volume Metadata                        322.4MB       0%
      Delayed Frees                                   133.8MB       0%
      File Operation Metadata                             4KB       0%

      Total Footprint                                 32.58GB       4%

      Footprint Data Reduction                        30.91GB       3%
           Auto Adaptive Compression                  30.91GB       3%
      Effective Total Footprint                        1.67GB       0%


      Vserver : svm
      Volume  : vol1_restored

      Feature                                         Used       Used%
      --------------------------------             ----------    -----
      Volume Data Footprint                           32.53GB       4%
             Footprint in Performance Tier            32.90GB     100%
             Footprint in FSxFabricpoolObjectStore         0B       0%
      Volume Guarantee                                     0B       0%
      Flexible Volume Metadata                        322.4MB       0%
      Deduplication Metadata                          241.0MB       0%
           Temporary Deduplication                    241.0MB       0%
      Delayed Frees                                   378.0MB       0%
      File Operation Metadata                             4KB       0%

      Total Footprint                                 33.45GB       4%

      Effective Total Footprint                       33.45GB       4%
2 entries were displayed.

::*> aggr show-efficiency -instance

                             Name of the Aggregate: aggr1
                      Node where Aggregate Resides: FsxId0f1302327a12b6488-01
Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 129.0GB
                               Total Physical Used: 37.79GB
                    Total Storage Efficiency Ratio: 3.41:1
Total Data Reduction Logical Used Without Snapshots: 64.38GB
Total Data Reduction Physical Used Without Snapshots: 37.69GB
Total Data Reduction Efficiency Ratio Without Snapshots: 1.71:1
Total Data Reduction Logical Used without snapshots and flexclones: 64.38GB
Total Data Reduction Physical Used without snapshots and flexclones: 37.69GB
Total Data Reduction Efficiency Ratio without snapshots and flexclones: 1.71:1
Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 129.2GB
Total Physical Used in FabricPool Performance Tier: 37.95GB
Total FabricPool Performance Tier Storage Efficiency Ratio: 3.40:1
Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 64.60GB
Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 37.85GB
Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 1.71:1
                Logical Space Used for All Volumes: 64.38GB
               Physical Space Used for All Volumes: 64.26GB
               Space Saved by Volume Deduplication: 122.9MB
Space Saved by Volume Deduplication and pattern detection: 122.9MB
                Volume Deduplication Savings ratio: 1.00:1
                 Space Saved by Volume Compression: 0B
                  Volume Compression Savings ratio: 1.00:1
      Space Saved by Inline Zero Pattern Detection: 0B
                    Volume Data Reduction SE Ratio: 1.00:1
               Logical Space Used by the Aggregate: 68.42GB
              Physical Space Used by the Aggregate: 37.79GB
           Space Saved by Aggregate Data Reduction: 30.63GB
                 Aggregate Data Reduction SE Ratio: 1.81:1
              Logical Size Used by Snapshot Copies: 64.59GB
             Physical Size Used by Snapshot Copies: 188.6MB
              Snapshot Volume Data Reduction Ratio: 350.65:1
            Logical Size Used by FlexClone Volumes: 0B
          Physical Sized Used by FlexClone Volumes: 0B
             FlexClone Volume Data Reduction Ratio: 1.00:1
Snapshot And FlexClone Volume Data Reduction SE Ratio: 350.65:1
                         Number of Volumes Offline: 0
                    Number of SIS Disabled Volumes: 1
         Number of SIS Change Log Disabled Volumes: 1

::*> aggr show -fields availsize, usedsize, size, physical-used, physical-used-percent, data-compaction-space-saved, data-compaction-space-saved-percent, data-compacted-count, composite-capacity-tier-used, sis-space-saved, sis-space-saved-percent, sis-shared-count, inactive-data-reporting-start-timestamp
aggregate availsize size    usedsize physical-used physical-used-percent data-compaction-space-saved data-compaction-space-saved-percent data-compacted-count composite-capacity-tier-used sis-space-saved sis-space-saved-percent sis-shared-count inactive-data-reporting-start-timestamp
--------- --------- ------- -------- ------------- --------------------- --------------------------- ----------------------------------- -------------------- ---------------------------- --------------- ----------------------- ---------------- ---------------------------------------
aggr1     816.8GB   861.8GB 45.00GB  43.52GB       5%                    30.63GB                     41%                                 1.36GB               0B                           30.63GB        41%                     1.36GB           -

::*> aggr show-space

      Aggregate : aggr1
      Performance Tier
      Feature                                          Used      Used%
      --------------------------------           ----------     ------
      Volume Footprints                             67.04GB         7%
      Aggregate Metadata                             8.59GB         1%
      Snapshot Reserve                              45.36GB         5%
      Total Used                                    90.35GB        10%

      Total Physical Used                           43.52GB         5%


      Total Provisioned Space                         129GB        14%


      Aggregate : aggr1
      Object Store: FSxFabricpoolObjectStore
      Feature                                          Used      Used%
      --------------------------------           ----------     ------
      Logical Used                                       0B          -
      Logical Referenced Capacity                        0B          -
      Logical Unreferenced Capacity                      0B          -

      Total Physical Used                                0B          -



2 entries were displayed.

Since Total Physical Used under Performance Tier in aggr show-space is 43.52GB, we can see that the data written back to SSD has lost the aggregate-layer data reduction.

So it is not the case that "since the data will be tiered to capacity pool storage anyway, there is no need to enable Inactive data compression." You do need to weigh the impact on the system, but enabling it with a possible write-back to SSD in mind saves physical SSD consumption.
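
For reference, a minimal sketch of kicking off Inactive data compression manually on a volume, assuming the volume efficiency inactive-data-compression commands available since around ONTAP 9.8 (the volume name and the -inactive-days 0 value, which targets data regardless of how long it has been cold, are placeholders for illustration):

::*> volume efficiency inactive-data-compression start -vserver svm -volume vol1_restored -inactive-days 0
::*> volume efficiency inactive-data-compression show -vserver svm -volume vol1_restored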

Checking Cost Explorer

Check as of 2024/1/6 10:06

Now let's check Cost Explorer.

The check was made at 10:06 on 2024/1/6. The backups were taken around 9:08 on 2024/1/5, so more than a day has passed.

The results are as follows.

20240106_1006_FSxN backup charges

Both are around 0.01GB, with no significant difference.

Let's also check at the resource level rather than by usage type. Being able to see resource-level costs on a daily basis is thanks to the very welcome Cost Explorer update at the end of 2023.

The results are as follows.

20240106_1010_FSxN backup charges (by resource)

It is easy to see exactly which backup is incurring how much cost.
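
The same resource-level numbers can presumably also be pulled with the AWS CLI via Cost Explorer's get-cost-and-usage-with-resources API. A minimal sketch, assuming resource-level data is enabled in Cost Explorer (it only covers roughly the last 14 days) and that the SERVICE dimension value for FSx is "Amazon FSx"; adjust the dates and filter for your environment:

$ aws ce get-cost-and-usage-with-resources \
    --time-period Start=2024-01-05,End=2024-01-07 \
    --granularity DAILY \
    --metrics UsageQuantity \
    --filter '{"Dimensions": {"Key": "SERVICE", "Values": ["Amazon FSx"]}}' \
    --group-by Type=DIMENSION,Key=RESOURCE_ID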

Let's also check the invoice.

20240106_1016_Invoice

Here, the backup storage size is 0.035GB for both regions. The gap from the values shown in Cost Explorer is presumably because the time window used for the internal cost calculation differs.

Check as of 2024/1/7 8:30

I left it for about another day.

The check was made at 8:30 on 2024/1/7, roughly two days after the backups were taken.

The results are as follows.

20240107_0830_FSxN backup charges (by resource)

The backup storage size for 1/5 is 0.05GB for both.

The billed backup storage cost is calculated based on the average backup storage usage for that month.

Backups: You pay for backup storage based on the amount of storage consumed. Backups are incremental, which means you are charged only for the most recently saved backup, so duplicate data is not counted toward charges. You pay per GB-month based on your average backup storage usage for the month.

Amazon FSx for NetApp ONTAP の料金 — AWS

So, if the backup storage size per day is 0.05GB, the average backup storage usage for the month comes to 0.05 × 31 = 1.55GB.
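
To illustrate the GB-month math with hypothetical numbers: a backup with a physical size of 2GB that exists for the last 27 days of a 31-day month is billed as 2GB × 27 / 31 ≈ 1.74GB-month, i.e. the same as keeping an average of 1.74GB of backup storage for the whole month.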

The Total Physical Used from aggr show-space at the time each backup was taken was as follows.

  • Northern Virginia FSxN (FSxFabricpoolObjectStore) : 2.25GB
  • Osaka FSxN (Performance Tier) : 12.30GB

So neither matches the calculated monthly average backup storage usage. The Osaka FSxN in particular differs significantly.

This is intriguing.

The backup storage sizes shown on the invoice are 0.077GB and 0.076GB, with no significant difference.

20240107_0837_Invoice

Backing up a volume whose data has no aggregate-layer data reduction on SSD

The backup where all the data was on SSD and the backup where the data was on capacity pool storage came out the same size.

Could it be that data reduction also takes effect on backup storage, just as additional data reduction takes effect when tiering to capacity pool storage?

Let's isolate this by checking the backup size of a volume that has not had Inactive data compression run and whose Tiering Policy is None.

If it comes out the same size as the backups verified earlier, we can say that additional data reduction also takes effect on backup storage.

I take a backup from the management console. The backup target is the volume restored on the Osaka FSxN.

$ aws fsx describe-backups --backup-ids backup-0992ff0808438d595 --region ap-northeast-3
{
    "Backups": [
        {
            "BackupId": "backup-0992ff0808438d595",
            "Lifecycle": "AVAILABLE",
            "Type": "USER_INITIATED",
            "ProgressPercent": 100,
            "CreationTime": "2024-01-07T09:03:23.467000+00:00",
            "KmsKeyId": "arn:aws:kms:ap-northeast-3:<AWSアカウントID>:key/cc5bc947-b9fa-4614-8f7d-8ab0b5778679",
            "ResourceARN": "arn:aws:fsx:ap-northeast-3:<AWSアカウントID>:backup/backup-0992ff0808438d595",
            "Tags": [
                {
                    "Key": "Name",
                    "Value": "non-97-backup-no-inactive-data-compression"
                }
            ],
            "OwnerId": "<AWSアカウントID>",
            "ResourceType": "VOLUME",
            "Volume": {
                "FileSystemId": "fs-0f1302327a12b6488",
                "Lifecycle": "ACTIVE",
                "Name": "vol1_restored",
                "OntapConfiguration": {
                    "JunctionPath": "/vol1_restored",
                    "SizeInMegabytes": 65536,
                    "StorageEfficiencyEnabled": false,
                    "StorageVirtualMachineId": "svm-0a7e0e36f5d9aebb9",
                    "TieringPolicy": {
                        "Name": "NONE"
                    },
                    "CopyTagsToBackups": false,
                    "VolumeStyle": "FLEXVOL",
                    "SizeInBytes": 68719476736
                },
                "ResourceARN": "arn:aws:fsx:ap-northeast-3:<AWSアカウントID>:volume/fsvol-07c86bb48288498f5",
                "VolumeId": "fsvol-07c86bb48288498f5",
                "VolumeType": "ONTAP"
            }
        }
    ]
}

It took exactly four minutes for ProgressPercent to reach 100.
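
For reference, the same kind of backup could presumably also be initiated with the AWS CLI instead of the management console; a minimal sketch reusing the volume ID, tag, and region from the output above:

$ aws fsx create-backup \
    --volume-id fsvol-07c86bb48288498f5 \
    --tags Key=Name,Value=non-97-backup-no-inactive-data-compression \
    --region ap-northeast-3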

Check as of 2024/1/9 9:27

I left it for a day and a half after taking the backup.

The check was made at 9:27 on 2024/1/9. The additional backup was taken at 18:03 on 2024/1/7.

The results are as follows.

20240109_0927_FSxN backup charges (by resource)

The purple bars are the additional backup. Its backup storage cost for 1/8 matches those of the previously taken backups shown in orange and green.

The volume's usage at the time of backup was 32GB. Even though its usage differs greatly from that of the other volumes' backups, the backup storage sizes are nearly equal, which leads me to conclude the following:

  • Additional data reduction takes effect on backup storage
  • In doing so, the FSxN file system's aggregate-layer data reduction is ignored on backup storage

Therefore, if the purpose of running Inactive data compression is to cut backup storage costs, the goal cannot be achieved, because compressing the data has no effect on the backup storage size.

Let's also check the backup storage size by month.

20240109_0927_FSxN backup charges (by resource, monthly)

It is simply the sum of the daily backup storage sizes.

For reference, the backup storage cost shown on the invoice is as follows.

20240109_0927_Invoice

Added 2024/1/12 : Checking the backup storage size for a volume holding data that is hard to compress

Creating the volumes

It occurred to me to wonder what the backup storage size would be for a volume holding data that is hard to compress.

If the data resists compression, I would expect the backup storage size to be larger than for the earlier backups.

Let's check.

First, create the volumes. I prepare two: one with Tiering Policy All and one with Tiering Policy None.

::> volume create -vserver svm -volume vol_tiering_all -aggregate aggr1 -size 64GB -state online -type RW -tiering-policy all
[Job 105] Job succeeded: Successful

::> volume create -vserver svm -volume vol_tiering_none -aggregate aggr1 -size 64GB -state online -type RW -tiering-policy none
[Job 107] Job succeeded: Successful

::> volume mount -vserver svm -volume vol_tiering_all -junction-path /vol_tiering_all

::> volume mount -vserver svm -volume vol_tiering_none -junction-path /vol_tiering_none

::> volume show vol_*
Vserver   Volume       Aggregate    State      Type       Size  Available Used%
--------- ------------ ------------ ---------- ---- ---------- ---------- -----
svm       vol_tiering_all
                       aggr1        online     RW         64GB    60.80GB    0%
svm       vol_tiering_none
                       aggr1        online     RW         64GB    60.80GB    0%
2 entries were displayed.

The Osaka FSxN's Storage Efficiency, volume, and aggregate information is as follows:

::> set diag

Warning: These diagnostic commands are for use by NetApp personnel only.
Do you want to continue? {y|n}: y

::*> volume efficiency show
Vserver    Volume           State     Status      Progress           Policy
---------- ---------------- --------- ----------- ------------------ ----------
svm        vol1             Enabled   Idle        Idle for 104:01:54 auto
svm        vol1_restored    Disabled  Idle        Idle for 101:58:06 -
svm        vol_tiering_all  Enabled   Idle        Idle for 00:03:01  auto
svm        vol_tiering_none Enabled   Idle        Idle for 00:02:45  auto
4 entries were displayed.

::*> volume show -volume vol_* -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared,logical-used, logical-used-percent,logical-used-by-afs, logical-available
vserver volume          size available filesystem-size total   used  percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- --------------- ---- --------- --------------- ------- ----- ------------ ------------------ -------------------------- ------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- -------------------------------------------
svm     vol_tiering_all 64GB 60.80GB   64GB            60.80GB 320KB 0%           0B                 0%                         0B                  320KB        0%                   -    320KB               -                                   -
svm     vol_tiering_none
                        64GB 60.80GB   64GB            60.80GB 320KB 0%           0B                 0%                         0B                  320KB        0%                   -    320KB               -                                   -
2 entries were displayed.

::*> volume show-footprint -volume vol_*


      Vserver : svm
      Volume  : vol_tiering_all

      Feature                                         Used       Used%
      --------------------------------             ----------    -----
      Volume Data Footprint                             500KB       0%
             Footprint in Performance Tier             1.86MB     100%
             Footprint in FSxFabricpoolObjectStore         0B       0%
      Volume Guarantee                                     0B       0%
      Flexible Volume Metadata                        107.5MB       0%
      Delayed Frees                                    1.37MB       0%
      File Operation Metadata                             4KB       0%

      Total Footprint                                 109.3MB       0%

      Effective Total Footprint                       109.3MB       0%


      Vserver : svm
      Volume  : vol_tiering_none

      Feature                                         Used       Used%
      --------------------------------             ----------    -----
      Volume Data Footprint                             480KB       0%
             Footprint in Performance Tier             2.13MB     100%
             Footprint in FSxFabricpoolObjectStore         0B       0%
      Volume Guarantee                                     0B       0%
      Flexible Volume Metadata                        107.5MB       0%
      Delayed Frees                                    1.66MB       0%
      File Operation Metadata                             4KB       0%

      Total Footprint                                 109.6MB       0%

      Effective Total Footprint                       109.6MB       0%
2 entries were displayed.

::*> aggr show-efficiency -instance

                             Name of the Aggregate: aggr1
                      Node where Aggregate Resides: FsxId0f1302327a12b6488-01
Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 129.0GB
                               Total Physical Used: 36.05GB
                    Total Storage Efficiency Ratio: 3.58:1
Total Data Reduction Logical Used Without Snapshots: 64.36GB
Total Data Reduction Physical Used Without Snapshots: 36.05GB
Total Data Reduction Efficiency Ratio Without Snapshots: 1.79:1
Total Data Reduction Logical Used without snapshots and flexclones: 64.36GB
Total Data Reduction Physical Used without snapshots and flexclones: 36.05GB
Total Data Reduction Efficiency Ratio without snapshots and flexclones: 1.79:1
Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 129.2GB
Total Physical Used in FabricPool Performance Tier: 36.22GB
Total FabricPool Performance Tier Storage Efficiency Ratio: 3.57:1
Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 64.59GB
Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 36.21GB
Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 1.78:1
                Logical Space Used for All Volumes: 64.36GB
               Physical Space Used for All Volumes: 64.24GB
               Space Saved by Volume Deduplication: 122.9MB
Space Saved by Volume Deduplication and pattern detection: 122.9MB
                Volume Deduplication Savings ratio: 1.00:1
                 Space Saved by Volume Compression: 0B
                  Volume Compression Savings ratio: 1.00:1
      Space Saved by Inline Zero Pattern Detection: 0B
                    Volume Data Reduction SE Ratio: 1.00:1
               Logical Space Used by the Aggregate: 66.69GB
              Physical Space Used by the Aggregate: 36.05GB
           Space Saved by Aggregate Data Reduction: 30.63GB
                 Aggregate Data Reduction SE Ratio: 1.85:1
              Logical Size Used by Snapshot Copies: 64.60GB
             Physical Size Used by Snapshot Copies: 11.30MB
              Snapshot Volume Data Reduction Ratio: 5855.97:1
            Logical Size Used by FlexClone Volumes: 0B
          Physical Sized Used by FlexClone Volumes: 0B
             FlexClone Volume Data Reduction Ratio: 1.00:1
Snapshot And FlexClone Volume Data Reduction SE Ratio: 5855.97:1
                         Number of Volumes Offline: 0
                    Number of SIS Disabled Volumes: 1
         Number of SIS Change Log Disabled Volumes: 1

::*> aggr show -fields availsize, usedsize, size, physical-used, physical-used-percent, data-compaction-space-saved, data-compaction-space-saved-percent, data-compacted-count, composite-capacity-tier-used, sis-space-saved, sis-space-saved-percent, sis-shared-count, inactive-data-reporting-start-timestamp
aggregate availsize size    usedsize physical-used physical-used-percent data-compaction-space-saved data-compaction-space-saved-percent data-compacted-count composite-capacity-tier-used sis-space-saved sis-space-saved-percent sis-shared-count inactive-data-reporting-start-timestamp
--------- --------- ------- -------- ------------- --------------------- --------------------------- ----------------------------------- -------------------- ---------------------------- --------------- ----------------------- ---------------- ---------------------------------------
aggr1     821.6GB   861.8GB 40.13GB  38.43GB       4%                    30.63GB                     43%                                 1.36GB               0B                           30.63GB       43%                     1.36GB           -

::*> aggr show-space

      Aggregate : aggr1
      Performance Tier
      Feature                                          Used      Used%
      --------------------------------           ----------     ------
      Volume Footprints                             67.27GB         7%
      Aggregate Metadata                             3.50GB         0%
      Snapshot Reserve                              45.36GB         5%
      Total Used                                    85.49GB         9%

      Total Physical Used                           38.43GB         4%


      Total Provisioned Space                         257GB        28%


      Aggregate : aggr1
      Object Store: FSxFabricpoolObjectStore
      Feature                                          Used      Used%
      --------------------------------           ----------     ------
      Logical Used                                       0B          -
      Logical Referenced Capacity                        0B          -
      Logical Unreferenced Capacity                      0B          -

      Total Physical Used                                0B          -



2 entries were displayed.

Adding binary files with random data blocks

I add binary files made up of random data blocks.

First, I write to the Tiering Policy All volume.

$ sudo mkdir -p /mnt/fsxn/vol_tiering_all
$ sudo mount -t nfs svm-0a7e0e36f5d9aebb9.fs-0f1302327a12b6488.fsx.ap-northeast-3.amazonaws.com:/vol_tiering_all /mnt/fsxn/vol_tiering_all

$ sudo dd if=/dev/urandom of=/mnt/fsxn/vol_tiering_all/random_pattern_binary_block_32GiB bs=1M count=32768
32768+0 records in
32768+0 records out
34359738368 bytes (34 GB, 32 GiB) copied, 265.21 s, 130 MB/s

I also write to the Tiering Policy None volume.

$ sudo mkdir -p /mnt/fsxn/vol_tiering_none
sh-5.2$ sudo mount -t nfs svm-0a7e0e36f5d9aebb9.fs-0f1302327a12b6488.fsx.ap-northeast-3.amazonaws.com:/vol_tiering_none /mnt/fsxn/vol_tiering_none

sh-5.2$ sudo dd if=/dev/urandom of=/mnt/fsxn/vol_tiering_none/random_pattern_binary_block_32GiB bs=1M count=32768
32768+0 records in
32768+0 records out
34359738368 bytes (34 GB, 32 GiB) copied, 263.074 s, 131 MB/s

After the writes, the Osaka FSxN's Storage Efficiency, volume, and aggregate information is as follows:

::*> volume efficiency show
Vserver    Volume           State     Status      Progress           Policy
---------- ---------------- --------- ----------- ------------------ ----------
svm        vol1             Enabled   Idle        Idle for 104:23:16 auto
svm        vol1_restored    Disabled  Idle        Idle for 102:19:28 -
svm        vol_tiering_all  Enabled   Idle        Idle for 00:08:17  auto
svm        vol_tiering_none Enabled   Idle        Idle for 00:01:04  auto
4 entries were displayed.

::*> volume show -volume vol_* -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared,logical-used, logical-used-percent,logical-used-by-afs, logical-available
vserver volume          size available filesystem-size total   used    percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- --------------- ---- --------- --------------- ------- ------- ------------ ------------------ -------------------------- ------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- -------------------------------------------
svm     vol_tiering_all 64GB 28.30GB   64GB            60.80GB 32.50GB 53%          248.8MB            1%                         248.8MB             32.74GB      54%                  -      32.74GB             -                                   -
svm     vol_tiering_none
                        64GB 28.31GB   64GB            60.80GB 32.49GB 53%          0B                 0%                         0B                  32.49GB      53%                  -      32.49GB             0B                                  0%
2 entries were displayed.

::*> volume show-footprint -volume vol_*


      Vserver : svm
      Volume  : vol_tiering_all

      Feature                                         Used       Used%
      --------------------------------             ----------    -----
      Volume Data Footprint                           32.50GB       4%
             Footprint in Performance Tier            680.9MB       2%
             Footprint in FSxFabricpoolObjectStore
                                                         32GB      98%
      Volume Guarantee                                     0B       0%
      Flexible Volume Metadata                        322.4MB       0%
      Deduplication Metadata                          192.4MB       0%
           Deduplication                              192.4MB       0%
      Delayed Frees                                   171.9MB       0%
      File Operation Metadata                             4KB       0%

      Total Footprint                                 33.17GB       4%

      Effective Total Footprint                       33.17GB       4%


      Vserver : svm
      Volume  : vol_tiering_none

      Feature                                         Used       Used%
      --------------------------------             ----------    -----
      Volume Data Footprint                           32.49GB       4%
             Footprint in Performance Tier            32.51GB     100%
             Footprint in FSxFabricpoolObjectStore         0B       0%
      Volume Guarantee                                     0B       0%
      Flexible Volume Metadata                        322.4MB       0%
      Deduplication Metadata                          42.41MB       0%
           Deduplication                              42.41MB       0%
      Delayed Frees                                   23.80MB       0%
      File Operation Metadata                             4KB       0%

      Total Footprint                                 32.87GB       4%

      Effective Total Footprint                       32.87GB       4%
2 entries were displayed.

::*> aggr show-efficiency -instance

                             Name of the Aggregate: aggr1
                      Node where Aggregate Resides: FsxId0f1302327a12b6488-01
Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 193.6GB
                               Total Physical Used: 101.8GB
                    Total Storage Efficiency Ratio: 1.90:1
Total Data Reduction Logical Used Without Snapshots: 129.0GB
Total Data Reduction Physical Used Without Snapshots: 101.8GB
Total Data Reduction Efficiency Ratio Without Snapshots: 1.27:1
Total Data Reduction Logical Used without snapshots and flexclones: 129.0GB
Total Data Reduction Physical Used without snapshots and flexclones: 101.8GB
Total Data Reduction Efficiency Ratio without snapshots and flexclones: 1.27:1
Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 162.3GB
Total Physical Used in FabricPool Performance Tier: 70.69GB
Total FabricPool Performance Tier Storage Efficiency Ratio: 2.30:1
Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 97.75GB
Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 70.68GB
Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 1.38:1
                Logical Space Used for All Volumes: 129.0GB
               Physical Space Used for All Volumes: 128.7GB
               Space Saved by Volume Deduplication: 371.8MB
Space Saved by Volume Deduplication and pattern detection: 371.8MB
                Volume Deduplication Savings ratio: 1.00:1
                 Space Saved by Volume Compression: 0B
                  Volume Compression Savings ratio: 1.00:1
      Space Saved by Inline Zero Pattern Detection: 0B
                    Volume Data Reduction SE Ratio: 1.00:1
               Logical Space Used by the Aggregate: 132.4GB
              Physical Space Used by the Aggregate: 101.8GB
           Space Saved by Aggregate Data Reduction: 30.63GB
                 Aggregate Data Reduction SE Ratio: 1.30:1
              Logical Size Used by Snapshot Copies: 64.60GB
             Physical Size Used by Snapshot Copies: 11.43MB
              Snapshot Volume Data Reduction Ratio: 5787.92:1
            Logical Size Used by FlexClone Volumes: 0B
          Physical Sized Used by FlexClone Volumes: 0B
             FlexClone Volume Data Reduction Ratio: 1.00:1
Snapshot And FlexClone Volume Data Reduction SE Ratio: 5787.92:1
                         Number of Volumes Offline: 0
                    Number of SIS Disabled Volumes: 1
         Number of SIS Change Log Disabled Volumes: 1

::*> aggr show -fields availsize, usedsize, size, physical-used, physical-used-percent, data-compaction-space-saved, data-compaction-space-saved-percent, data-compacted-count, composite-capacity-tier-used, sis-space-saved, sis-space-saved-percent, sis-shared-count, inactive-data-reporting-start-timestamp
aggregate availsize size    usedsize physical-used physical-used-percent data-compaction-space-saved data-compaction-space-saved-percent data-compacted-count composite-capacity-tier-used sis-space-saved sis-space-saved-percent sis-shared-count inactive-data-reporting-start-timestamp
--------- --------- ------- -------- ------------- --------------------- --------------------------- ----------------------------------- -------------------- ---------------------------- --------------- ----------------------- ---------------- ---------------------------------------
aggr1     780.6GB   861.8GB 81.18GB  79.19GB       9%                    30.63GB                     27%                                 1.36GB               32.28GB                      30.63GB       27%                     1.36GB           -

::*> aggr show-space

      Aggregate : aggr1
      Performance Tier
      Feature                                          Used      Used%
      --------------------------------           ----------     ------
      Volume Footprints                             101.1GB        11%
      Aggregate Metadata                            10.71GB         1%
      Snapshot Reserve                              45.36GB         5%
      Total Used                                    126.5GB        14%

      Total Physical Used                           79.19GB         9%


      Total Provisioned Space                         257GB        28%


      Aggregate : aggr1
      Object Store: FSxFabricpoolObjectStore
      Feature                                          Used      Used%
      --------------------------------           ----------     ------
      Logical Used                                  32.28GB          -
      Logical Referenced Capacity                   32.12GB          -
      Logical Unreferenced Capacity                 159.8MB          -

      Total Physical Used                           32.28GB          -



2 entries were displayed.

Taking the backups

Now I take the backups.

First, the backup of the Tiering Policy All volume.

After starting the backup, I measure the time until it completes.

$ backup_id=backup-0c4d4b505ff586c1a

$ while true; do
  date

  progress_percent=$(aws fsx describe-backups \
    --backup-ids "$backup_id" \
    --query 'Backups[].ProgressPercent' \
    --output text \
    --region ap-northeast-3
  )

  echo "Backup progress percent : ${progress_percent}"

  if [[ $progress_percent == 100 ]] ; then
    break
  else
    echo "-------------------"
  fi

  sleep 10
done
Tue Jan  9 07:43:09 AM UTC 2024
Backup progress percent : 
-------------------
Tue Jan  9 07:43:20 AM UTC 2024
Backup progress percent : 
-------------------
Tue Jan  9 07:43:31 AM UTC 2024
Backup progress percent : 0
-------------------
Tue Jan  9 07:43:42 AM UTC 2024
Backup progress percent : 9
-------------------
.
.
(中略)
.
.
-------------------
Tue Jan  9 07:45:52 AM UTC 2024
Backup progress percent : 55
-------------------
.
.
(中略)
.
.
-------------------
Tue Jan  9 07:47:51 AM UTC 2024
Backup progress percent : 99
-------------------
.
.
(中略)
.
.
-------------------
Tue Jan  9 07:49:51 AM UTC 2024
Backup progress percent : 100

$ aws fsx describe-backups \
    --backup-ids "$backup_id" \
    --region ap-northeast-3
{
    "Backups": [
        {
            "BackupId": "backup-0c4d4b505ff586c1a",
            "Lifecycle": "AVAILABLE",
            "Type": "USER_INITIATED",
            "ProgressPercent": 100,
            "CreationTime": "2024-01-09T07:42:47.120000+00:00",
            "KmsKeyId": "arn:aws:kms:ap-northeast-3:<AWSアカウントID>:key/cc5bc947-b9fa-4614-8f7d-8ab0b5778679",
            "ResourceARN": "arn:aws:fsx:ap-northeast-3:<AWSアカウントID>:backup/backup-0c4d4b505ff586c1a",
            "Tags": [
                {
                    "Key": "Name",
                    "Value": "non-97-backup-tiering-policy-all-random"
                }
            ],
            "OwnerId": "<AWSアカウントID>",
            "ResourceType": "VOLUME",
            "Volume": {
                "FileSystemId": "fs-0f1302327a12b6488",
                "Lifecycle": "ACTIVE",
                "Name": "vol_tiering_all",
                "OntapConfiguration": {
                    "JunctionPath": "/vol_tiering_all",
                    "SizeInMegabytes": 65536,
                    "StorageEfficiencyEnabled": true,
                    "StorageVirtualMachineId": "svm-0a7e0e36f5d9aebb9",
                    "TieringPolicy": {
                        "Name": "ALL"
                    },
                    "CopyTagsToBackups": false,
                    "VolumeStyle": "FLEXVOL",
                    "SizeInBytes": 68719476736
                },
                "ResourceARN": "arn:aws:fsx:ap-northeast-3:<AWSアカウントID>:volume/fsvol-01008a3bc02253bc1",
                "VolumeId": "fsvol-01008a3bc02253bc1",
                "VolumeType": "ONTAP"
            }
        }
    ]
}

It took about 7 minutes.

I back up the Tiering Policy None volume as well.

After starting the backup, I again measure the time until it completes.

backup_id=backup-04b5578fff9a10f33

while true; do
  date

  progress_percent=$(aws fsx describe-backups \
    --backup-ids "$backup_id" \
    --query 'Backups[].ProgressPercent' \
    --output text \
    --region ap-northeast-3
  )

  echo "Backup progress percent : ${progress_percent}"

  if [[ $progress_percent == 100 ]] ; then
    break
  else
    echo "-------------------"
  fi

  sleep 10
done
Tue Jan  9 07:53:58 AM UTC 2024
Backup progress percent : 
-------------------
Tue Jan  9 07:54:09 AM UTC 2024
Backup progress percent : 
-------------------
Tue Jan  9 07:54:20 AM UTC 2024
Backup progress percent : 
-------------------
Tue Jan  9 07:54:31 AM UTC 2024
Backup progress percent : 0
-------------------
.
.
(中略)
.
.
-------------------
Tue Jan  9 07:55:57 AM UTC 2024
Backup progress percent : 63
-------------------
.
.
(中略)
.
.
-------------------
Tue Jan  9 07:58:08 AM UTC 2024
Backup progress percent : 99
-------------------
.
.
(中略)
.
.
-------------------
Tue Jan  9 08:00:29 AM UTC 2024
Backup progress percent : 100

$ aws fsx describe-backups \
    --backup-ids "$backup_id" \
    --region ap-northeast-3
{
    "Backups": [
        {
            "BackupId": "backup-04b5578fff9a10f33",
            "Lifecycle": "AVAILABLE",
            "Type": "USER_INITIATED",
            "ProgressPercent": 100,
            "CreationTime": "2024-01-09T07:53:41.788000+00:00",
            "KmsKeyId": "arn:aws:kms:ap-northeast-3:<AWSアカウントID>:key/cc5bc947-b9fa-4614-8f7d-8ab0b5778679",
            "ResourceARN": "arn:aws:fsx:ap-northeast-3:<AWSアカウントID>:backup/backup-04b5578fff9a10f33",
            "Tags": [
                {
                    "Key": "Name",
                    "Value": "non-97-backup-tiering-policy-none-random"
                }
            ],
            "OwnerId": "<AWSアカウントID>",
            "ResourceType": "VOLUME",
            "Volume": {
                "FileSystemId": "fs-0f1302327a12b6488",
                "Lifecycle": "ACTIVE",
                "Name": "vol_tiering_none",
                "OntapConfiguration": {
                    "JunctionPath": "/vol_tiering_none",
                    "SizeInMegabytes": 65536,
                    "StorageEfficiencyEnabled": true,
                    "StorageVirtualMachineId": "svm-0a7e0e36f5d9aebb9",
                    "TieringPolicy": {
                        "Name": "NONE"
                    },
                    "CopyTagsToBackups": false,
                    "VolumeStyle": "FLEXVOL",
                    "SizeInBytes": 68719476736
                },
                "ResourceARN": "arn:aws:fsx:ap-northeast-3:<AWSアカウントID>:volume/fsvol-03bdfc54227fc8f6b",
                "VolumeId": "fsvol-03bdfc54227fc8f6b",
                "VolumeType": "ONTAP"
            }
        }
    ]
}

This one also took about 7 minutes.

Checking the backup storage size in Cost Explorer

I check the backup storage size in Cost Explorer.
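
The screenshots below come from the Cost Explorer console, but the same daily usage values can presumably also be pulled with the Cost Explorer API. A minimal sketch; the SERVICE dimension value "Amazon FSx" and the date range are assumptions to adjust for your own environment:

$ aws ce get-cost-and-usage \
    --time-period Start=2024-01-09,End=2024-01-13 \
    --granularity DAILY \
    --metrics UsageQuantity \
    --filter '{"Dimensions":{"Key":"SERVICE","Values":["Amazon FSx"]}}' \
    --group-by Type=DIMENSION,Key=USAGE_TYPE \
    --region us-east-1

Grouping by USAGE_TYPE separates backup usage from SSD and capacity pool usage; Cost Explorer has to be enabled for the account.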

The current time is 2024/1/12 9:45. The backups were taken at 2024/1/9 16:42 and 16:53, so more than two days have passed.

The results are as follows:

20240112_0945_FSxNバックアップ料金_リソース

The blue and pink bars are the backups added this time.

On 1/10 the backup storage size is 1.05GB for both. Since 1.05GB × 31 days = 32.55GB, the backup storage size roughly matches the size of the file that was written.

This shows that data that compresses poorly is kept on backup storage without any data reduction.

It also shows that whether the backed-up data sits on the SSD tier or on capacity pool storage has no effect on the backup storage size.

There seems to be no need to work hard on aggregate-layer data reduction solely to reduce the backup storage cost of Tiering Policy All volumes

I verified whether, when data is compressed on Amazon FSx for NetApp ONTAP capacity pool storage, FSx backups are billed for the decompressed data size.

The conclusions are as follows:

  • The backup storage size is not the data size after undoing the aggregate-layer data reduction achieved by Inactive data compression, compaction, and so on
  • It is the size after the data reduction that is applied on backup storage itself
  • At backup time, the data reduction achieved at the FSxN file system's aggregate layer is ignored on backup storage

Since this directly affects cost, I would like to see it documented as part of the official specification.

There seems to be no need to work hard on aggregate-layer data reduction solely to reduce the backup storage cost of Tiering Policy All volumes.

That said, when the Tiering Policy is anything other than All, Inactive data compression helps keep physical SSD consumption down, and it also keeps physical SSD consumption down when data tiered to the capacity pool is later written back to the SSD tier, so there is still plenty of reason to enable it.

If high backup storage costs are a concern, the following approaches are worth considering:

  1. Reduce physical volume usage through deduplication
  2. Instead of FSx backups, add another FSxN file system and back up with SnapMirror

Even where aggregate-layer data reduction does not apply, deduplication can still take effect. It is fairly involved, but even for Tiering Policy All volumes you can get deduplication to apply by following the article below.
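
As one illustration of option 1, the generic ONTAP commands for deduplicating data that has already been written are shown below, as a minimal sketch run from an SSH session to the file system's management endpoint (the endpoint is a placeholder, and vol1_restored is simply the volume from this test that had Storage Efficiency disabled). The full procedure for Tiering Policy All volumes in the referenced article is considerably more involved, since the data first has to be on the SSD tier.

$ ssh fsxadmin@<management-endpoint>

::> volume efficiency on -vserver svm -volume vol1_restored
::> volume efficiency start -vserver svm -volume vol1_restored -scan-old-data true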

The latter is a pattern to adopt when the data stored on the FSxN file system has reached a certain scale, on the order of terabytes or more.

It takes advantage of the price difference between capacity pool storage and backup storage.

As shown below, capacity pool storage is cheaper than backup storage:

  • Capacity pool storage (Single-AZ) : 0.0238 USD/GB-month
  • Capacity pool storage (Multi-AZ) : 0.0476 USD/GB-month
  • Backup storage : 0.050 USD/GB-month

By setting the SnapMirror destination volume's Tiering Policy to All, you can keep costs lower than with the FSx backup feature.

However, adding a destination FSxN file system for SnapMirror adds cost of its own, so unless the data is of a certain scale, the FSx backup feature will likely work out cheaper. A rough break-even sketch follows.
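
To get a feel for the break-even point, here is a minimal bash sketch. The fixed monthly cost of the secondary file system is a hypothetical placeholder (it depends on throughput capacity, SSD size, and deployment type), and the per-GB prices are the Single-AZ figures listed above; none of these numbers come from this verification.

backup_price=0.050          # USD per GB-month, backup storage
capacity_pool_price=0.0238  # USD per GB-month, capacity pool storage (Single-AZ)
fixed_monthly_usd=300       # hypothetical fixed monthly cost of the secondary FSxN file system

awk -v b="$backup_price" -v c="$capacity_pool_price" -v f="$fixed_monthly_usd" 'BEGIN {
  # Break-even: b * size = c * size + f
  size_gb = f / (b - c)
  printf "Break-even size : %.0f GB (about %.1f TB)\n", size_gb, size_gb / 1024
}'

With these assumed numbers the break-even lands at roughly 11TB, which is consistent with the "terabytes or more" guideline above.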

I hope this article helps someone.

That's all from のんピ (@non____97), Consulting Department, AWS Business Division!
